All the vulnerabilities related to Siemens - SINEC INS
var-202102-1490
Vulnerability from variot
OpenSSL 1.0.2 supports SSLv2. If a client attempts to negotiate SSLv2 with a server that is configured to support both SSLv2 and more recent SSL and TLS versions, then a check is made for a version rollback attack when unpadding an RSA signature. Clients that support SSL or TLS versions greater than SSLv2 are supposed to use a special form of padding. A server that supports greater than SSLv2 is supposed to reject connection attempts from a client where this special form of padding is present, because this indicates that a version rollback has occurred (i.e. both client and server support greater than SSLv2, and yet this is the version that is being requested).

The implementation of this padding check inverted the logic, so that the connection attempt is accepted if the padding is present and rejected if it is absent. This means that such a server will accept a connection if a version rollback attack has occurred. Further, the server will erroneously reject a connection if a normal SSLv2 connection attempt is made.

Only OpenSSL 1.0.2 servers from version 1.0.2s to 1.0.2x are affected by this issue. In order to be vulnerable, a 1.0.2 server must: 1) have configured SSLv2 support at compile time (this is off by default), 2) have configured SSLv2 support at runtime (this is off by default), and 3) have configured SSLv2 ciphersuites (these are not in the default ciphersuite list). OpenSSL 1.1.1 does not have SSLv2 support and therefore is not vulnerable to this issue.

The underlying error is in the implementation of the RSA_padding_check_SSLv23() function. This also affects the RSA_SSLV23_PADDING padding mode used by various other functions. Although 1.1.1 does not support SSLv2, the RSA_padding_check_SSLv23() function still exists, as does the RSA_SSLV23_PADDING padding mode. Applications that directly call that function or use that padding mode will encounter this issue.
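The inverted check can be illustrated with a small toy model. This is plain Python, not OpenSSL code: the marker constant and the function names are illustrative only (in real SSLv2-compatible RSA padding, a client that also supports newer protocol versions fills the final eight padding bytes with 0x03 as the rollback signal).

```python
# Toy model of the CVE-2021-23839 logic inversion (illustrative, not OpenSSL code).
ROLLBACK_MARKER = b"\x03" * 8  # rollback signal in SSLv2-compatible padding


def has_rollback_marker(padding: bytes) -> bool:
    """True if the last 8 padding bytes carry the rollback signal."""
    return padding[-8:] == ROLLBACK_MARKER


def server_accepts_fixed(padding: bytes) -> bool:
    # Correct behaviour: the marker means both sides support a newer
    # protocol, so an SSLv2 handshake indicates a rollback -> reject.
    return not has_rollback_marker(padding)


def server_accepts_buggy(padding: bytes) -> bool:
    # CVE-2021-23839: the condition was inverted, so rollback attempts
    # were accepted and genuine SSLv2-only clients were rejected.
    return has_rollback_marker(padding)
```

With a rolled-back handshake (marker present) the buggy server accepts and the fixed server rejects; with a genuine SSLv2 client (no marker) the behaviour is the reverse, matching both failure modes described above.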
However, since there is no support for the SSLv2 protocol in 1.1.1, this is considered a bug and not a security issue in that version. OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.0.2y (affected: 1.0.2s-1.0.2x). OpenSSL contains a security-level vulnerability; information may be tampered with.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: Red Hat Advanced Cluster Management for Kubernetes version 2.3
Advisory ID: RHSA-2021:3016-01
Product: Red Hat ACM
Advisory URL: https://access.redhat.com/errata/RHSA-2021:3016
Issue date: 2021-08-05
CVE Names: CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-2708 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20934 CVE-2019-25013 CVE-2020-1730 CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 CVE-2020-8286 CVE-2020-8927 CVE-2020-11668 CVE-2020-13434 CVE-2020-15358 CVE-2020-27618 CVE-2020-28196 CVE-2020-28469 CVE-2020-28500 CVE-2020-28851 CVE-2020-28852 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3326 CVE-2021-3377 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 CVE-2021-3560 CVE-2021-20271 CVE-2021-20305 CVE-2021-21272 CVE-2021-21309 CVE-2021-21321 CVE-2021-21322 CVE-2021-23337 CVE-2021-23343 CVE-2021-23346 CVE-2021-23362 CVE-2021-23364 CVE-2021-23368 CVE-2021-23369 CVE-2021-23382 CVE-2021-23383 CVE-2021-23839 CVE-2021-23840 CVE-2021-23841 CVE-2021-25217 CVE-2021-27219 CVE-2021-27292 CVE-2021-27358 CVE-2021-28092 CVE-2021-28918 CVE-2021-29418 CVE-2021-29477 CVE-2021-29478 CVE-2021-29482 CVE-2021-32399 CVE-2021-33033 CVE-2021-33034 CVE-2021-33502 CVE-2021-33623 CVE-2021-33909 CVE-2021-33910
=====================================================================
1. Summary:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 General Availability release images, which fix several bugs and security issues.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE links in the References section.
2. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
This advisory contains the container images for Red Hat Advanced Cluster Management for Kubernetes, which fix several bugs and security issues. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
* fastify-reply-from: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21321)

* fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service (CVE-2021-21322)

* nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)

* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)

* redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)

* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)

* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)

* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)

* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)

* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)

* redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)

* nodejs-lodash: command injection via template (CVE-2021-23337)

* nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)

* browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)

* nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)

* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)

* nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)

* nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)

* openssl: integer overflow in CipherUpdate (CVE-2021-23840)

* openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)

* nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)

* grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)

* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)

* nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)

* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)

* normalize-url: ReDoS for data URLs (CVE-2021-33502)

* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)

* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)

* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)

* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
* RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)

* cluster became offline after apiserver health check (BZ# 1942589)
3. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing
4. Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
5. References:
https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-2708 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20934 https://access.redhat.com/security/cve/CVE-2019-25013 https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-8231 https://access.redhat.com/security/cve/CVE-2020-8284 https://access.redhat.com/security/cve/CVE-2020-8285 https://access.redhat.com/security/cve/CVE-2020-8286 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-11668 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-28196 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28851 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3377 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 
https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3560 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21272 https://access.redhat.com/security/cve/CVE-2021-21309 https://access.redhat.com/security/cve/CVE-2021-21321 https://access.redhat.com/security/cve/CVE-2021-21322 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23343 https://access.redhat.com/security/cve/CVE-2021-23346 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23364 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23369 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-23383 https://access.redhat.com/security/cve/CVE-2021-23839 https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/cve/CVE-2021-25217 https://access.redhat.com/security/cve/CVE-2021-27219 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-27358 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-28918 https://access.redhat.com/security/cve/CVE-2021-29418 https://access.redhat.com/security/cve/CVE-2021-29477 https://access.redhat.com/security/cve/CVE-2021-29478 https://access.redhat.com/security/cve/CVE-2021-29482 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33033 https://access.redhat.com/security/cve/CVE-2021-33034 
https://access.redhat.com/security/cve/CVE-2021-33502 https://access.redhat.com/security/cve/CVE-2021-33623 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/cve/CVE-2021-33910 https://access.redhat.com/security/updates/classification/#important
6. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYQyKDNzjgjWX9erEAQhAWQ//fU2h/y+76CVkExXChhgJ779lC9Ec1f+X 6yw1b2WCHcztbTwyRtZw90dvIA1rNIDBrd83jIwfzsXzxEfGcCTriOmotHKX44+4 w6uPpmPSOBTsXB/yV/kvbPWpUKkahITC2uvjaInzO2zMmUQ2ntNGpvPu7BbFLmL1 oHMVIZaJ+zrPifwPhGqlp3rAkYe6uGobdvwtrOMXw8L5VnJor+35xLjos5k30IlC 4lftpWm9cD4oozdb5hw4A0i8fyAvue4hzpmgPfUJ6bngux8wycYhPGiRJR1HX03T MSXsWNBtqXNcB7r/GGqen73rr/eyyqsqfJ7+l8Uu7ph5cjk04foZcMqg+rz/1xne gVPkWcUJT8j7BH2sO8qiMdfYNl3+xNqPI9MtPEI8K/eiwynwETZqsKnEGIyhcTcX xe08Io2jV3jlnpQO/SBcvpKyzcqhDOuNBH2ozhn7Ka68WIMk2OuWempQcyDlWizO 1UbgoiMVb0hlP0APVpJKNtpfFCjBzFC24gWSAOPTep3vzA418Sn/moCJupM+3PPA QIzkGAt9f7sffI0JEg0JPEy0/aTmfsPm7XeR6DG+xF7o1nfy1SOcf+tcnPD0K+z8 8fS0uUMB/wO2s5yQ1TctsYzL9S5HRwMtnq7qKwWq9ItYzdQB4pcmyK1WgJAHVAtf Omk9Hj44tdI= =X9lR -----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

OpenSSL Security Advisory [16 February 2021]
============================================
Null pointer deref in X509_issuer_and_serial_hash() (CVE-2021-23841)
Severity: Moderate
The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However, it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer dereference and a crash, leading to a potential denial of service attack.
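The defensive pattern the fix applies can be sketched with a toy model. This is illustrative Python, not OpenSSL code: a parse failure is modelled as None, and the hash is a stand-in (the real function hashes a canonical encoding of the issuer with MD5; the function name here mirrors the C API but the body is an assumption-free simplification).

```python
# Toy model of the CVE-2021-23841 fix (illustrative, not OpenSSL code).
import hashlib
from typing import Optional


def issuer_and_serial_hash(issuer: Optional[bytes],
                           serial: Optional[bytes]) -> Optional[int]:
    """Hash issuer + serial, propagating a parse failure (None) to the
    caller instead of dereferencing it."""
    if issuer is None or serial is None:
        # The pre-fix code effectively skipped this check for the
        # issuer field, leading to a NULL pointer dereference.
        return None
    digest = hashlib.md5(issuer + serial).digest()  # MD5 as in the real API
    return int.from_bytes(digest[:4], "little")
```

A caller that receives None (the error value) can fail cleanly rather than crash, which is the behaviour the fixed OpenSSL code restores.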
This issue was reported to OpenSSL on 15th December 2020 by Tavis Ormandy from Google. The fix was developed by Matt Caswell.
Incorrect SSLv2 rollback protection (CVE-2021-23839)
Severity: Low
OpenSSL 1.0.2 supports SSLv2.
This issue was reported to OpenSSL on 21st January 2021 by D. Katz and Joel Luellwitz from Trustwave. The fix was developed by Matt Caswell.
Integer overflow in CipherUpdate (CVE-2021-23840)
Severity: Low
Calls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow the output length argument in some cases where the input length is close to the maximum permissible length for an integer on the platform. In such cases the return value from the function call will be 1 (indicating success), but the output length value will be negative. This could cause applications to behave incorrectly or crash.
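The arithmetic behind this can be shown with a toy model of a 32-bit C int. This is plain Python, not OpenSSL code; the function name is illustrative, and the point is only that the sum wraps to a negative value while the return code still signals success.

```python
# Toy model of CVE-2021-23840 (illustrative, not OpenSSL code).
from typing import Tuple

INT_BITS = 32                       # width of a C int on the modelled platform
INT_MAX = 2 ** (INT_BITS - 1) - 1   # 2147483647


def wrap_signed(n: int, bits: int = INT_BITS) -> int:
    """Simulate C signed-integer wraparound at the given width."""
    return (n + 2 ** (bits - 1)) % 2 ** bits - 2 ** (bits - 1)


def cipher_update_model(accumulated_outl: int, inl: int) -> Tuple[int, int]:
    # Returns (ret, outl) as the buggy EVP_*Update path would:
    # ret == 1 signals success even when outl has wrapped negative.
    outl = wrap_signed(accumulated_outl + inl)
    return 1, outl
```

With an accumulated output length near INT_MAX, adding even a small input length produces a negative outl alongside a "success" return value, which is exactly the inconsistent state the advisory warns applications about.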
This issue was reported to OpenSSL on 13th December 2020 by Paul Kehrer. The fix was developed by Matt Caswell.
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20210216.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1490", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.4.0.0" }, { "model": "openssl", "scope": "lte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2x" }, { "model": "openssl", "scope": "gte", "trust": 1.0, 
"vendor": "openssl", "version": "1.0.2s" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.5.0.0.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.3.1.2" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.3.5" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "5.9.0.0.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "graalvm", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "21.0.0.2" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.4.0" }, { "model": "zfs storage appliance kit", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.8" }, { "model": "enterprise manager for storage management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "13.4.0.0" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "business intelligence", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.2.1.3.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "oracle graalvm", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "oracle enterprise manager ops center", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "openssl", "scope": null, "trust": 0.8, "vendor": "openssl", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "NVD", "id": "CVE-2021-23839" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": 
"@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.0.2x", "versionStartIncluding": "1.0.2s", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.3.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:12.2.1.4.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.5.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_for_storage_management:13.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:zfs_storage_appliance_kit:8.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:community:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:community:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.9.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23839" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-1230" } ], "trust": 0.6 }, "cve": "CVE-2021-23839", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 4.3, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-23839", "impactScore": null, 
"integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 3.7, "baseSeverity": "LOW", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 1.4, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 3.7, "baseSeverity": "Low", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-23839", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-23839", "trust": 1.8, "value": "LOW" }, { "author": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202102-1230", "trust": 0.6, "value": "LOW" }, { "author": "VULMON", "id": "CVE-2021-23839", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "NVD", "id": "CVE-2021-23839" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL 1.0.2 supports SSLv2. 
If a client attempts to negotiate SSLv2 with a server that is configured to support both SSLv2 and more recent SSL and TLS versions then a check is made for a version rollback attack when unpadding an RSA signature. Clients that support SSL or TLS versions greater than SSLv2 are supposed to use a special form of padding. A server that supports greater than SSLv2 is supposed to reject connection attempts from a client where this special form of padding is present, because this indicates that a version rollback has occurred (i.e. both client and server support greater than SSLv2, and yet this is the version that is being requested). The implementation of this padding check inverted the logic so that the connection attempt is accepted if the padding is present, and rejected if it is absent. This means that such as server will accept a connection if a version rollback attack has occurred. Further the server will erroneously reject a connection if a normal SSLv2 connection attempt is made. Only OpenSSL 1.0.2 servers from version 1.0.2s to 1.0.2x are affected by this issue. In order to be vulnerable a 1.0.2 server must: 1) have configured SSLv2 support at compile time (this is off by default), 2) have configured SSLv2 support at runtime (this is off by default), 3) have configured SSLv2 ciphersuites (these are not in the default ciphersuite list) OpenSSL 1.1.1 does not have SSLv2 support and therefore is not vulnerable to this issue. The underlying error is in the implementation of the RSA_padding_check_SSLv23() function. This also affects the RSA_SSLV23_PADDING padding mode used by various other functions. Although 1.1.1 does not support SSLv2 the RSA_padding_check_SSLv23() function still exists, as does the RSA_SSLV23_PADDING padding mode. Applications that directly call that function or use that padding mode will encounter this issue. However since there is no support for the SSLv2 protocol in 1.1.1 this is considered a bug and not a security issue in that version. 
OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.0.2y (Affected 1.0.2s-1.0.2x). OpenSSL There is a security level vulnerability in.Information may be tampered with. Pillow is a Python-based image processing library. \nThere is currently no information about this vulnerability, please feel free to follow CNNVD or manufacturer announcements. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: Red Hat Advanced Cluster Management for Kubernetes version 2.3\nAdvisory ID: RHSA-2021:3016-01\nProduct: Red Hat ACM\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:3016\nIssue date: 2021-08-05\nCVE Names: CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 \n CVE-2018-1000858 CVE-2019-2708 CVE-2019-9169 \n CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 \n CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 \n CVE-2019-20934 CVE-2019-25013 CVE-2020-1730 \n CVE-2020-8231 CVE-2020-8284 CVE-2020-8285 \n CVE-2020-8286 CVE-2020-8927 CVE-2020-11668 \n CVE-2020-13434 CVE-2020-15358 CVE-2020-27618 \n CVE-2020-28196 CVE-2020-28469 CVE-2020-28500 \n CVE-2020-28851 CVE-2020-28852 CVE-2020-29361 \n CVE-2020-29362 CVE-2020-29363 CVE-2021-3326 \n CVE-2021-3377 CVE-2021-3449 CVE-2021-3450 \n CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 \n CVE-2021-3520 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3560 CVE-2021-20271 CVE-2021-20305 \n CVE-2021-21272 CVE-2021-21309 CVE-2021-21321 \n CVE-2021-21322 CVE-2021-23337 CVE-2021-23343 \n CVE-2021-23346 CVE-2021-23362 CVE-2021-23364 \n CVE-2021-23368 CVE-2021-23369 CVE-2021-23382 \n CVE-2021-23383 CVE-2021-23839 CVE-2021-23840 \n CVE-2021-23841 CVE-2021-25217 CVE-2021-27219 \n CVE-2021-27292 CVE-2021-27358 CVE-2021-28092 \n CVE-2021-28918 CVE-2021-29418 CVE-2021-29477 \n CVE-2021-29478 CVE-2021-29482 
CVE-2021-32399 \n CVE-2021-33033 CVE-2021-33034 CVE-2021-33502 \n CVE-2021-33623 CVE-2021-33909 CVE-2021-33910 \n=====================================================================\n\n1. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 General\nAvailability release images, which fix several bugs and security issues. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE links in the References section. \n\n2. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nThis advisory contains the container images for Red Hat Advanced Cluster\nManagement for Kubernetes, which fix several bugs and security issues. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template (CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and 
loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that 
has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-2708\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20934\nhttps://access.redhat.com/security/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-8231\nhttps://access.redhat.com/security/cve/CVE-2020-8284\nhttps://access.redhat.com/security/cve/CVE-2020-8285\nhttps://access.redhat.com/security/cve/CVE-2020-8286\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-11668\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-28196\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28851\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3377\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.
redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3560\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21272\nhttps://access.redhat.com/security/cve/CVE-2021-21309\nhttps://access.redhat.com/security/cve/CVE-2021-21321\nhttps://access.redhat.com/security/cve/CVE-2021-21322\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23343\nhttps://access.redhat.com/security/cve/CVE-2021-23346\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23364\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23369\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-23383\nhttps://access.redhat.com/security/cve/CVE-2021-23839\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-25217\nhttps://access.redhat.com/security/cve/CVE-2021-27219\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-27358\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-28918\nhttps://access.redhat.com/security/cve/CVE-2021-29418\nhttps://access.redhat.com/security/cve/CVE-2021-29477\nhttps://access.redhat.com/security/cve/CVE-2021-29478\nhttps://access.redhat.com/security/cve/CVE-2021-29482\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE
-2021-33033\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33502\nhttps://access.redhat.com/security/cve/CVE-2021-33623\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/cve/CVE-2021-33910\nhttps://access.redhat.com/security/updates/classification/#important\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQyKDNzjgjWX9erEAQhAWQ//fU2h/y+76CVkExXChhgJ779lC9Ec1f+X\n6yw1b2WCHcztbTwyRtZw90dvIA1rNIDBrd83jIwfzsXzxEfGcCTriOmotHKX44+4\nw6uPpmPSOBTsXB/yV/kvbPWpUKkahITC2uvjaInzO2zMmUQ2ntNGpvPu7BbFLmL1\noHMVIZaJ+zrPifwPhGqlp3rAkYe6uGobdvwtrOMXw8L5VnJor+35xLjos5k30IlC\n4lftpWm9cD4oozdb5hw4A0i8fyAvue4hzpmgPfUJ6bngux8wycYhPGiRJR1HX03T\nMSXsWNBtqXNcB7r/GGqen73rr/eyyqsqfJ7+l8Uu7ph5cjk04foZcMqg+rz/1xne\ngVPkWcUJT8j7BH2sO8qiMdfYNl3+xNqPI9MtPEI8K/eiwynwETZqsKnEGIyhcTcX\nxe08Io2jV3jlnpQO/SBcvpKyzcqhDOuNBH2ozhn7Ka68WIMk2OuWempQcyDlWizO\n1UbgoiMVb0hlP0APVpJKNtpfFCjBzFC24gWSAOPTep3vzA418Sn/moCJupM+3PPA\nQIzkGAt9f7sffI0JEg0JPEy0/aTmfsPm7XeR6DG+xF7o1nfy1SOcf+tcnPD0K+z8\n8fS0uUMB/wO2s5yQ1TctsYzL9S5HRwMtnq7qKwWq9ItYzdQB4pcmyK1WgJAHVAtf\nOmk9Hj44tdI=\n=X9lR\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. OpenSSL Security Advisory [16 February 2021]\n============================================\n\nNull pointer deref in X509_issuer_and_serial_hash() (CVE-2021-23841)\n====================================================================\n\nSeverity: Moderate\n\nThe OpenSSL public API function X509_issuer_and_serial_hash() attempts to\ncreate a unique hash value based on the issuer and serial number data contained\nwithin an X509 certificate. 
However it fails to correctly handle any errors\nthat may occur while parsing the issuer field (which might occur if the issuer\nfield is maliciously constructed). This may subsequently result in a NULL\npointer deref and a crash leading to a potential denial of service attack. \n\nThis issue was reported to OpenSSL on 15th December 2020 by Tavis Ormandy from\nGoogle. The fix was developed by Matt Caswell. \n\nIncorrect SSLv2 rollback protection (CVE-2021-23839)\n====================================================\n\nSeverity: Low\n\nOpenSSL 1.0.2 supports SSLv2. \n\nThis issue was reported to OpenSSL on 21st January 2021 by D. Katz and Joel\nLuellwitz from Trustwave. The fix was developed by Matt Caswell. \n\nInteger overflow in CipherUpdate (CVE-2021-23840)\n=================================================\n\nSeverity: Low\n\nCalls to EVP_CipherUpdate, EVP_EncryptUpdate and EVP_DecryptUpdate may overflow\nthe output length argument in some cases where the input length is close to the\nmaximum permissible length for an integer on the platform. In such cases the\nreturn value from the function call will be 1 (indicating success), but the\noutput length value will be negative. This could cause applications to behave\nincorrectly or crash. \n\nThis issue was reported to OpenSSL on 13th December 2020 by Paul Kehrer. The fix\nwas developed by Matt Caswell. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20210216.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "169676" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-23839", "trust": 3.5 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "PULSESECURE", "id": "SA44846", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU94508446", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-003872", "trust": 0.8 }, { "db": "CS-HELP", "id": "SB2021041363", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.0636", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2259.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1502", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2657", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021041501", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071618", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021092209", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202102-1230", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2021-23839", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169676", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "PACKETSTORM", "id": "163747" }, { 
"db": "PACKETSTORM", "id": "169676" }, { "db": "NVD", "id": "CVE-2021-23839" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "id": "VAR-202102-1490", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:21:40.055000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Oracle\u00a0Critical\u00a0Patch\u00a0Update\u00a0Advisory\u00a0-\u00a0April\u00a02021 Mitsubishi Electric Mitsubishi Electric Corporation", "trust": 0.8, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=30919ab80a478f2d81f2e9acdcca3fa4740cd547" }, { "title": "OpenSSL Fixes for encryption problem vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=142768" }, { "title": "IBM: Security Bulletin: Vulnerabilities in OpenSSL affect AIX (CVE-2021-23839, CVE-2021-23840, and CVE-2021-23841)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3d5f5025c65711c2d9489cd9fe502978" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-23839 log" }, { "title": "IBM: Security Bulletin: IBM MQ for HP NonStop Server is affected by OpenSSL vulnerabilities CVE-2021-23839, CVE-2021-23840 and CVE-2021-23841", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9ff59b7038a3eb3a3ff198d62d8029d1" }, { "title": "IBM: Security Bulletin: Multiple OpenSSL Vulnerabilities Affect IBM Connect:Direct for HP NonStop", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=10390d4e672c305fd00ed46b83871274" }, { "title": "Amazon Linux 2: ALAS2-2021-1608", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1608" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2021-23839 " }, { "title": "CVE-2021-23839", "trust": 0.1, "url": "https://github.com/pwncast/cve-2021-23839 " }, { "title": "tekton-image-scan-trivy", "trust": 0.1, "url": "https://github.com/vinamra28/tekton-image-scan-trivy " }, { "title": "TASSL-1.1.1k", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1k " }, { "title": "", "trust": 0.1, "url": "https://github.com/scholarnishu/trivy-by-aquasecurity " }, { "title": "", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "", "trust": 0.1, "url": "https://github.com/fredrkl/trivy-demo " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-327", "trust": 1.0 }, { "problemtype": "Inappropriate cryptographic strength (CWE-326) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "NVD", "id": "CVE-2021-23839" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 
1.8, "url": "https://www.openssl.org/news/secadv/20210216.txt" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20210219-0009/" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.7, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846" }, { "trust": 1.7, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23839" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=30919ab80a478f2d81f2e9acdcca3fa4740cd547" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu94508446/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99475301/" }, { "trust": 0.7, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841-2/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=30919ab80a478f2d81f2e9acdcca3fa4740cd547" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-vulnerability-was-identified-and-remediated-in-the-ibm-maas360-cloud-extender-v2-103-000-051-and-modules/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-in-openssl-affect-ibm-tivoli-netcool-system-service-monitors-application-service-monitors/" }, { "trust": 0.6, "url": 
"https://www.ibm.com/blogs/psirt/security-bulletin-multiple-openssl-vulnerabilities-affect-ibm-connectdirect-for-hp-nonstop/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1502" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2657" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-websphere-mq-for-hp-nonstop-server-is-affected-by-multiple-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-mq-for-hp-nonstop-server-is-affected-by-openssl-vulnerabilities-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.0636" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021041501" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-sterling-connectexpress-for-unix-is-affected-by-multiple-vulnerabilities-in-openssl-2/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilites-affect-engineering-lifecycle-management-and-ibm-engineering-products/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021092209" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071618" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-sdk-for-node-js-in-ibm-cloud-5/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerability-in-openssl-affects-ibm-rational-clearcase-cve-2020-1971-cve-2021-23839-cve-2021-23840-cve-2021-23841-cve-2021-23839-cve-2021-23840-cve-2021-23841/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-aix-cve-2021-23839-cve-2021-23840-and-cve-2021-23841/" }, { "trust": 0.6, 
"url": "https://vigilance.fr/vulnerability/openssl-1-0-2-read-write-access-via-sslv2-rollback-protection-bypass-34596" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve/" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-integration-bus-and-ibm-app-connect-enterprise-v11-cve-2021-23839-cve-2021-23840/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-vulnerabilities-in-openssl-affect-ibm-integration-bus-and-ibm-app-connect-enterprise-v11-cve-2021-23839-cve-2021-23840-2/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-openssl-vulnerabilites-impacting-aspera-high-speed-transfer-server-aspera-high-speed-transfer-endpoint-aspera-desktop-client-4-0-and-earlier-cve-2021-23839-cve-2021-23840-cve-2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2259.2" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-multiple-security-vulnerabilities-fixed-in-openssl-as-shipped-with-ibm-security-verify-products/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/327.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://github.com/pwncast/cve-2021-23839" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, 
{ "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "169676" }, { "db": "NVD", "id": "CVE-2021-23839" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-23839" }, { "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "169676" }, { "db": "NVD", "id": "CVE-2021-23839" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-02-16T00:00:00", "db": "VULMON", "id": "CVE-2021-23839" }, { "date": "2021-11-09T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": 
"2021-02-16T12:12:12", "db": "PACKETSTORM", "id": "169676" }, { "date": "2021-02-16T17:15:13.190000", "db": "NVD", "id": "CVE-2021-23839" }, { "date": "2021-04-13T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-975" }, { "date": "2021-02-16T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-23839" }, { "date": "2022-09-20T06:06:00", "db": "JVNDB", "id": "JVNDB-2021-003872" }, { "date": "2023-11-07T03:30:54.957000", "db": "NVD", "id": "CVE-2021-23839" }, { "date": "2021-04-14T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-975" }, { "date": "2022-09-19T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1230" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-1230" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL\u00a0 Cryptographic strength vulnerabilities in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-003872" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-975" } ], "trust": 0.6 } }
var-202207-0588
Vulnerability from variot
The llhttp parser (<v14.20.1, <v16.17.1 and <v18.9.1) in the http module in Node.js does not correctly handle multi-line Transfer-Encoding headers. This can lead to HTTP Request Smuggling (HRS). llhttp, and products from other vendors that embed it, are affected by this HTTP request smuggling vulnerability. Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update
Advisory ID: RHSA-2022:6389-01
Product: Red Hat Software Collections
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6389
Issue date: 2022-09-08
CVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-33987
====================================================================
1. Summary:
An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
3. Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).
Security Fix(es):
* nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)
* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)
* nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)
* nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)
* got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)
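The multi-line Transfer-Encoding flaw (CVE-2022-32215) is easiest to see when two HTTP implementations frame the same bytes differently. The sketch below is purely illustrative (it is not Node.js or llhttp internals): a parser that unfolds an obs-folded `Transfer-Encoding:` continuation line sees a chunked body, while a parser that ignores the fold sees an empty Transfer-Encoding and falls back to Content-Length, leaving trailing bytes to be interpreted as a second, smuggled request.

```python
# Illustrative-only sketch of HTTP Request Smuggling via an obs-folded
# Transfer-Encoding header. parse_headers() is a hypothetical toy parser,
# not the real llhttp; the point is that two framings of RAW disagree.

RAW = (b"POST / HTTP/1.1\r\n"
       b"Host: example.test\r\n"
       b"Content-Length: 4\r\n"
       b"Transfer-Encoding:\r\n"
       b" chunked\r\n"          # obs-fold continuation line
       b"\r\n"
       b"0\r\n\r\n"
       b"GET /admin HTTP/1.1\r\n\r\n")

def parse_headers(raw, unfold):
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.split(b"\r\n")[1:]   # skip the request line
    headers = {}
    last = None
    for line in lines:
        if line[:1] in (b" ", b"\t"):  # folded continuation line
            if unfold and last:
                headers[last] = (headers[last] + b" " + line.strip()).strip()
            continue                    # a strict parser drops the fold
        name, _, value = line.partition(b":")
        last = name.strip().lower()
        headers[last] = value.strip()
    return headers, body

# A lenient parser unfolds the header and sees "chunked": the body ends at
# "0\r\n\r\n" and the remaining "GET /admin ..." bytes look like a SECOND
# request on the same connection.
lenient, body = parse_headers(RAW, unfold=True)

# A strict parser ignores the fold, sees an empty Transfer-Encoding, and
# frames the body with Content-Length: 4 instead -- a different boundary.
strict, _ = parse_headers(RAW, unfold=False)

print(lenient[b"transfer-encoding"])  # b'chunked'
print(strict[b"transfer-encoding"])   # b''
```

When a front-end proxy and a back-end server take these two views of the same stream, the attacker-controlled trailing bytes are "smuggled" past the front-end's request-level controls.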
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
5. Bugs fixed (https://bugzilla.redhat.com/):
2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets 2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses 2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding 2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields 2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding 2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]
6. Package List:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):
Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm
noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm
ppc64le: rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm
s390x: rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm
x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):
Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm
noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm
x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2022-32212 https://access.redhat.com/security/cve/CVE-2022-32213 https://access.redhat.com/security/cve/CVE-2022-32214 https://access.redhat.com/security/cve/CVE-2022-32215 https://access.redhat.com/security/cve/CVE-2022-33987 https://access.redhat.com/security/updates/classification/#moderate
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-5326-1 security@debian.org https://www.debian.org/security/ Aron Xu January 24, 2023 https://www.debian.org/security/faq
Package : nodejs CVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548
Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.
We recommend that you upgrade your nodejs packages.
For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . ========================================================================== Ubuntu Security Notice USN-6491-1 November 21, 2023
nodejs vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS (Available with Ubuntu Pro)
Summary:
Several security issues were fixed in Node.js.
Software Description: - nodejs: An open-source, cross-platform JavaScript runtime environment.
Details:
Axel Chong discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-32212)
Zeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213, CVE-2022-32214, CVE-2022-32215)
It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)
It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)
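The DNS-rebinding issue described above (CVE-2022-32212) stems from an inspector host check that accepted strings which were not strictly valid IP addresses. The sketch below is a hypothetical illustration, not Node.js's actual inspector code: a loopback check done by string prefix can be satisfied by an attacker-controlled, DNS-rebindable hostname, whereas strict IP parsing rejects it.

```python
# Hypothetical illustration of the class of bug behind CVE-2022-32212
# (NOT the real Node.js check): validating "is this loopback?" by string
# matching instead of strict IP parsing admits rebindable hostnames.
import ipaddress

def naive_is_loopback(host: str) -> bool:
    # Flawed: "127.0.0.1.attacker.example" also starts with "127.",
    # and the attacker controls what that name resolves to (rebinding).
    return host == "localhost" or host.startswith("127.")

def strict_is_loopback(host: str) -> bool:
    try:
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False  # not a literal IP address at all -> reject

rebindable = "127.0.0.1.attacker.example"
print(naive_is_loopback(rebindable))   # True  (check bypassed)
print(strict_is_loopback(rebindable))  # False (rejected)
print(strict_is_loopback("127.0.0.1"))  # True
```

The fix in this class of bug is always the same: parse the host as an IP address first, and only then ask whether the parsed address is allowed.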
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libnode-dev 12.22.9~dfsg-1ubuntu3.2 libnode72 12.22.9~dfsg-1ubuntu3.2 nodejs 12.22.9~dfsg-1ubuntu3.2 nodejs-doc 12.22.9~dfsg-1ubuntu3.2
Ubuntu 20.04 LTS: libnode-dev 10.19.0~dfsg-3ubuntu1.3 libnode64 10.19.0~dfsg-3ubuntu1.3 nodejs 10.19.0~dfsg-3ubuntu1.3 nodejs-doc 10.19.0~dfsg-3ubuntu1.3
Ubuntu 18.04 LTS (Available with Ubuntu Pro): nodejs 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-dev 8.10.0~dfsg-2ubuntu0.4+esm4 nodejs-doc 8.10.0~dfsg-2ubuntu0.4+esm4
In general, a standard system update will make all the necessary changes. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29
https://security.gentoo.org/
Severity: Low Title: Node.js: Multiple Vulnerabilities Date: May 08, 2024 Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614 ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Affected packages
Package          Vulnerable    Unaffected
---------------  ------------  ------------
net-libs/nodejs  < 16.20.2     >= 16.20.2
Description
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774 [ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672 [ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883 [ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884 [ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918 [ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930 [ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931 [ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939 [ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940 [ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959 [ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960 [ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701 [ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712 [ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134 [ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135 [ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531 [ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532 [ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533 [ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778 [ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602 [ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786 [ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824 [ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212 [ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213 [ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214 [ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215 [ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222 [ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255 [ 29 ] CVE-2022-35256 
https://nvd.nist.gov/vuln/detail/CVE-2022-35256 [ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948 [ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949 [ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548 [ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581 [ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582 [ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583 [ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584 [ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586 [ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587 [ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588 [ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589 [ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590 [ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002 [ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003 [ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004 [ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005 [ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006 [ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558 [ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0588", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "llhttp", "scope": "gte", "trust": 1.0, "vendor": "llhttp", "version": "18.0.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", 
"version": "16.12.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "37" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "14.20.1" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.20.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "16.17.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.0.0" }, { "model": "llhttp", "scope": "gte", "trust": 1.0, "vendor": "llhttp", "version": "14.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "llhttp", "scope": "gte", "trust": 1.0, "vendor": "llhttp", "version": "16.0.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "16.16.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.13.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.5.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "18.9.1" }, { "model": "management center", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "3.3.2" }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "management center", "scope": null, "trust": 0.8, "vendor": "stormshield", "version": null }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null }, { "model": "llhttp", "scope": null, "trust": 0.8, "vendor": "llhttp", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "18.5.0", "versionStartIncluding": "18.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.20.0", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "16.16.0", "versionStartIncluding": "16.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "16.12.0", "versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "14.20.1", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "16.17.1", 
"versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.9.1", "versionStartIncluding": "18.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:37:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_management_center:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.3.2", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-32215" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" } ], "trust": 0.5 }, "cve": "CVE-2022-32215", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, 
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 3.9, "impactScore": 2.5, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.5, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2022-32215", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-32215", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202207-678", "trust": 0.6, "value": "MEDIUM" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "CNNVD", "id": "CNNVD-202207-678" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The llhttp 
parser \u003cv14.20.1, \u003cv16.17.1 and \u003cv18.9.1 in the http module in Node.js does not correctly handle multi-line Transfer-Encoding headers. This can lead to HTTP Request Smuggling (HRS). llhttp of llhttp For products from other vendors, HTTP There is a vulnerability related to request smuggling.Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID: RHSA-2022:6389-01\nProduct: Red Hat Software Collections\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6389\nIssue date: 2022-09-08\nCVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). 
\n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 24, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nodejs\nCVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. ==========================================================================\nUbuntu Security Notice USN-6491-1\nNovember 21, 2023\n\nnodejs vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nSeveral security issues were fixed in Node.js. \n\nSoftware Description:\n- nodejs: An open-source, cross-platform JavaScript runtime environment. \n\nDetails:\n\nAxel Chong discovered that Node.js incorrectly handled certain inputs. 
If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-32212)\n\nZeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213,\nCVE-2022-32214, CVE-2022-32215)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libnode-dev 12.22.9~dfsg-1ubuntu3.2\n libnode72 12.22.9~dfsg-1ubuntu3.2\n nodejs 12.22.9~dfsg-1ubuntu3.2\n nodejs-doc 12.22.9~dfsg-1ubuntu3.2\n\nUbuntu 20.04 LTS:\n libnode-dev 10.19.0~dfsg-3ubuntu1.3\n libnode64 10.19.0~dfsg-3ubuntu1.3\n nodejs 10.19.0~dfsg-3ubuntu1.3\n nodejs-doc 10.19.0~dfsg-3ubuntu1.3\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n nodejs 8.10.0~dfsg-2ubuntu0.4+esm4\n nodejs-dev 8.10.0~dfsg-2ubuntu0.4+esm4\n nodejs-doc 8.10.0~dfsg-2ubuntu0.4+esm4\n\nIn general, a standard system update will make all the necessary changes. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage Vulnerable Unaffected\n--------------- ------------ ------------\nnet-libs/nodejs \u003c 16.20.2 \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-32215" }, { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "VULMON", "id": "CVE-2022-32215" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-32215", "trust": 4.1 }, { "db": "HACKERONE", "id": "1501679", "trust": 2.4 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 2.4 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-013243", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168305", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169410", "trust": 0.7 }, { "db": "PACKETSTORM", "id": 
"168442", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168358", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170727", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.3673", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3488", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3505", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3487", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4136", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4101", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3586", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4681", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071827", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071338", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072639", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072522", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071612", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202207-678", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-32215", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168359", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "175817", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32215" }, { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-678" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, "id": "VAR-202207-0588", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, 
"last_update_date": "2024-07-23T20:25:16.794000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-444", "trust": 1.0 }, { "problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.5, "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/" }, { "trust": 2.4, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 2.4, "url": "https://hackerone.com/reports/1501679" }, { "trust": 2.4, "url": "https://www.debian.org/security/2023/dsa-5326" }, { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/" }, { "trust": 1.1, "url": "https://access.redhat.com/security/cve/cve-2022-32215" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/" }, { "trust": 1.0, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 0.6, "url": "https://security.netapp.com/advisory/ntap-20220915-0001/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3505" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168358/red-hat-security-advisory-2022-6449-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4681" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-32215/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4101" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3673" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4136" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3487" }, { "trust": 0.6, "url": 
"https://www.cybersecurity-help.cz/vdb/sb2022071827" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3586" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3488" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32214" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32213" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32212" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-33987" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6389" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6985" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6449" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6448" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nodejs" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.2" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-6491-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/nodejs/10.19.0~dfsg-3ubuntu1.3" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32215" }, { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-678" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-32215" }, { "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-678" }, { "db": "NVD", "id": "CVE-2022-32215" } ] }, 
"sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-06T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "date": "2022-09-08T14:41:32", "db": "PACKETSTORM", "id": "168305" }, { "date": "2022-10-18T22:30:49", "db": "PACKETSTORM", "id": "169410" }, { "date": "2022-09-21T13:47:04", "db": "PACKETSTORM", "id": "168442" }, { "date": "2022-09-13T15:43:41", "db": "PACKETSTORM", "id": "168358" }, { "date": "2022-09-13T15:43:55", "db": "PACKETSTORM", "id": "168359" }, { "date": "2023-01-25T16:09:12", "db": "PACKETSTORM", "id": "170727" }, { "date": "2023-11-21T16:00:44", "db": "PACKETSTORM", "id": "175817" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-07-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-678" }, { "date": "2022-07-14T15:15:08.387000", "db": "NVD", "id": "CVE-2022-32215" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-06T08:23:00", "db": "JVNDB", "id": "JVNDB-2022-013243" }, { "date": "2023-02-01T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-678" }, { "date": "2023-11-07T03:47:46.577000", "db": "NVD", "id": "CVE-2022-32215" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "175817" }, { "db": "CNNVD", "id": "CNNVD-202207-678" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "llhttp\u00a0 of \u00a0llhttp\u00a0 in products from other multiple vendors \u00a0HTTP\u00a0 Request Smuggling Vulnerability", 
"sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013243" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "environmental issue", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-678" } ], "trust": 0.6 } }
var-202312-0205
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart.
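The flaw class described here, a REST endpoint that accepts client-supplied parameter values without bounding their length, can be illustrated with a minimal sketch. Everything below is a hypothetical handler for illustration only, not SINEC INS code; the limit and names are assumptions.

```python
# Minimal sketch of server-side parameter length validation for a REST API.
# All names and the limit below are hypothetical, not SINEC INS internals.

MAX_PARAM_LEN = 255  # assumed limit; real limits are application-specific


def validate_params(params: dict) -> list:
    """Return the names of parameters whose values exceed the limit.

    A handler that skips this check and hands oversized values to code
    that assumes a bounded length can be crashed by a crafted request,
    which is the failure mode the advisory above describes.
    """
    return [
        name
        for name, value in params.items()
        if not isinstance(value, str) or len(value) > MAX_PARAM_LEN
    ]


def handle_request(params: dict):
    bad = validate_params(params)
    if bad:
        # Reject early with a 400 instead of letting the oversized value
        # reach backend code.
        return 400, {"error": "parameter too long", "params": bad}
    return 200, {"status": "ok"}
```

With this guard in place, `handle_request({"hostname": "x" * 10000})` is rejected with a 400 instead of reaching the backend.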
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0205", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "configurations": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "cve": "CVE-2023-48430", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "LOW", "baseScore": 2.7, "baseSeverity": "LOW", "confidentialityImpact": "NONE", "exploitabilityScore": 1.2, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "HIGH", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": 
"NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2023-48430", "trust": 1.0, "value": "LOW" }, { "author": "productcert@siemens.com", "id": "CVE-2023-48430", "trust": 1.0, "value": "LOW" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" }, { "db": "NVD", "id": "CVE-2023-48430" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart.", "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ], "trust": 1.0 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "SIEMENS", "id": "SSA-077170", "trust": 1.0 }, { "db": "NVD", "id": "CVE-2023-48430", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "id": "VAR-202312-0205", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:04:09.394000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-noinfo", "trust": 1.0 
} ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.0, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "NVD", "id": "CVE-2023-48430" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-12T12:15:15.433000", "db": "NVD", "id": "CVE-2023-48430" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-14T19:37:28.207000", "db": "NVD", "id": "CVE-2023-48430" } ] } }
var-202011-0840
Vulnerability from variot
Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address
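The bypass pattern described above is generic: an allow/deny check is applied only to the initial URL, while a redirect sends the client to a host that was never checked. A hedged Python sketch of the pattern follows (axios itself is a JavaScript library; the toy client and denylist below are illustrative assumptions, not axios code):

```python
# Sketch of why checking only the first URL is insufficient under redirects.
# Hypothetical illustration of the SSRF-by-redirect pattern, not axios code.
from urllib.parse import urlparse

# Example denylist of internal targets an SSRF typically aims for.
BLOCKED_HOSTS = {"169.254.169.254", "localhost", "127.0.0.1"}


def is_allowed(url: str) -> bool:
    return urlparse(url).hostname not in BLOCKED_HOSTS


def fetch(url: str, redirect_map: dict) -> str:
    """Toy client: redirect_map simulates servers answering with 302s.

    The initial URL passes is_allowed(), but redirects are followed
    blindly, so the final hop is never re-checked -- the bypass
    described above.
    """
    if not is_allowed(url):
        raise PermissionError(url)
    while url in redirect_map:
        url = redirect_map[url]  # vulnerable: new target is not re-checked
    return f"fetched {url}"


def fetch_safe(url: str, redirect_map: dict) -> str:
    """Same client, but every redirect target is validated again."""
    if not is_allowed(url):
        raise PermissionError(url)
    while url in redirect_map:
        url = redirect_map[url]
        if not is_allowed(url):  # re-check each hop
            raise PermissionError(url)
    return f"fetched {url}"
```

With `redirect_map = {"http://attacker.example/": "http://169.254.169.254/meta"}`, `fetch` reaches the internal host while `fetch_safe` raises; limiting or disabling redirect-following closes the same gap.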
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202011-0840", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "axios", "scope": "gte", "trust": 1.0, "vendor": "axios", "version": 
"0.19.0" }, { "model": "axios", "scope": "lte", "trust": 1.0, "vendor": "axios", "version": "0.21.0" }, { "model": "axios", "scope": "eq", "trust": 0.8, "vendor": "axios", "version": "0.21.0" }, { "model": "axios", "scope": "eq", "trust": 0.8, "vendor": "axios", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:axios:axios:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndIncluding": "0.21.0", "versionStartIncluding": "0.19.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-28168" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202011-650" } ], "trust": 0.6 }, "cve": "CVE-2020-28168", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { 
"@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 4.3, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-28168", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.9, "baseSeverity": "Medium", "confidentialityImpact": "High", "exploitabilityScore": null, "id": 
"CVE-2020-28168", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-28168", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202011-650", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-28168", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-28168" }, { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" }, { "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Axios NPM package 0.21.0 contains a Server-Side Request Forgery (SSRF) vulnerability where an attacker is able to bypass a proxy by providing a URL that responds with a redirect to a restricted host or IP address", "sources": [ { "db": "NVD", "id": "CVE-2020-28168" }, { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "VULMON", "id": "CVE-2020-28168" } ], "trust": 1.71 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-28168", "trust": 3.3 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.6 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.4 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-013151", "trust": 0.8 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202011-650", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2020-28168", "trust": 0.1 } ], "sources": [ { 
"db": "VULMON", "id": "CVE-2020-28168" }, { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" }, { "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "id": "VAR-202011-0840", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:03:12.627000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Requests\u00a0that\u00a0follow\u00a0a\u00a0redirect\u00a0are\u00a0not\u00a0passing\u00a0via\u00a0the\u00a0proxy\u00a0#3369", "trust": 0.8, "url": "https://github.com/axios/axios/issues/3369" }, { "title": "Axios Fixes for code issue vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=134944" }, { "title": "Debian CVElist Bug Report Logs: node-axios: CVE-2020-28168", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=073b117b4a58cf2da488286e32905713" }, { "title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to a Server-Side Request Forgery vulnerability (CVE-2020-28168)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=40b72bb161d1b7da9de5abec310d3cb1" }, { "title": "Django-Voice-Converter-with-Yandex-Speech-kit", "trust": 0.1, "url": "https://github.com/art610/django-voice-converter-with-yandex-speech-kit " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-28168" }, { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "problemtype_data": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-918", "trust": 1.0 }, { "problemtype": "Server-side request forgery (CWE-918) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.6, "url": "https://github.com/axios/axios/issues/3369" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28168" }, { "trust": 1.4, "url": "https://lists.apache.org/thread.html/r25d53acd06f29244b8a103781b0339c5e7efee9099a4d52f0c230e4a@%3ccommits.druid.apache.org%3e" }, { "trust": 1.4, "url": "https://lists.apache.org/thread.html/r954d80fd18e9dafef6e813963eb7e08c228151c2b6268ecd63b35d1f@%3ccommits.druid.apache.org%3e" }, { "trust": 1.4, "url": "https://lists.apache.org/thread.html/rdfd2901b8b697a3f6e2c9c6ecc688fd90d7f881937affb5144d61d6e@%3ccommits.druid.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r25d53acd06f29244b8a103781b0339c5e7efee9099a4d52f0c230e4a%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/r954d80fd18e9dafef6e813963eb7e08c228151c2b6268ecd63b35d1f%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.0, "url": "https://lists.apache.org/thread.html/rdfd2901b8b697a3f6e2c9c6ecc688fd90d7f881937affb5144d61d6e%40%3ccommits.druid.apache.org%3e" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu99475301/index.html" }, { "trust": 0.8, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": 
"https://www.ibm.com/blogs/psirt/security-bulletin-multiple-vulnerabilities-affect-ibm-cloud-pak-for-automation/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-app-connect-enterprise-certified-container-may-be-vulnerable-to-a-server-side-request-forgery-vulnerability-cve-2020-28168/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/node-js-axios-information-disclosure-via-server-side-request-forgery-34243" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" }, { "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2020-28168" }, { "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "db": "NVD", "id": "CVE-2020-28168" }, { "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-11-06T00:00:00", "db": "VULMON", "id": "CVE-2020-28168" }, { "date": "2021-06-21T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "date": "2020-11-06T20:15:13.163000", "db": "NVD", "id": "CVE-2020-28168" }, { "date": "2020-11-06T00:00:00", "db": "CNNVD", "id": "CNNVD-202011-650" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-13T00:00:00", "db": "VULMON", "id": "CVE-2020-28168" }, { "date": "2022-09-20T05:40:00", "db": "JVNDB", "id": "JVNDB-2020-013151" }, { "date": "2023-11-07T03:21:07.600000", "db": "NVD", "id": "CVE-2020-28168" }, { "date": "2022-09-19T00:00:00", "db": "CNNVD", "id": "CNNVD-202011-650" } ] 
}, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202011-650" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Axios\u00a0NPM\u00a0 Server-side request forgery vulnerability in package", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-013151" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202011-650" } ], "trust": 0.6 } }
var-202207-0107
Vulnerability from variot
AES OCB mode for 32-bit x86 platforms using the AES-NI assembly-optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in memory and was never overwritten. In the special case of "in place" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB-based cipher suites for TLS and DTLS, both are unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). The fix for CVE-2022-1292 did not cover other places in the c_rehash
script where the file names of certificates being hashed were possibly passed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows a malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
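The c_rehash weakness is a classic shell-metacharacter injection: untrusted file names are interpolated into a command line that a shell parses. A hedged Python sketch of the pattern and its fixes (the real c_rehash is a shell/Perl script; the filenames below are illustrative):

```python
# Sketch of the command-injection pattern behind the c_rehash issue and
# two safe alternatives. Hypothetical filenames; not the actual script.
import shlex


def hash_cert_unsafe(filename: str) -> str:
    # Vulnerable pattern: the untrusted filename is spliced into a string
    # a shell will parse, so a name like "cert;rm -rf ~.pem" runs extra
    # commands when the string is executed with shell=True.
    return f"openssl x509 -hash -noout -in {filename}"


def hash_cert_safe(filename: str) -> list:
    # Safe pattern: build an argument vector and run it without a shell
    # (e.g. subprocess.run(argv) with shell=False); metacharacters in the
    # filename are passed through as literal bytes.
    return ["openssl", "x509", "-hash", "-noout", "-in", filename]


def hash_cert_quoted(filename: str) -> str:
    # If a shell string is truly unavoidable, quote each operand.
    return f"openssl x509 -hash -noout -in {shlex.quote(filename)}"
```

The argument-vector form is the usual fix; it is also why `openssl rehash` (the C reimplementation) is preferred over the script on systems that have it.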
Gentoo Linux Security Advisory GLSA 202210-02
https://security.gentoo.org/
Severity: Normal
Title: OpenSSL: Multiple Vulnerabilities
Date: October 16, 2022
Bugs: #741570, #809980, #832339, #835343, #842489, #856592
ID: 202210-02
Synopsis
Multiple vulnerabilities have been discovered in OpenSSL, the worst of which could result in denial of service.
Background
OpenSSL is an Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general purpose cryptography library.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/openssl < 1.1.1q >= 1.1.1q
Description
Multiple vulnerabilities have been discovered in OpenSSL. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All OpenSSL users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=dev-libs/openssl-1.1.1q"
References
[ 1 ] CVE-2020-1968 https://nvd.nist.gov/vuln/detail/CVE-2020-1968
[ 2 ] CVE-2021-3711 https://nvd.nist.gov/vuln/detail/CVE-2021-3711
[ 3 ] CVE-2021-3712 https://nvd.nist.gov/vuln/detail/CVE-2021-3712
[ 4 ] CVE-2021-4160 https://nvd.nist.gov/vuln/detail/CVE-2021-4160
[ 5 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 6 ] CVE-2022-1292 https://nvd.nist.gov/vuln/detail/CVE-2022-1292
[ 7 ] CVE-2022-1473 https://nvd.nist.gov/vuln/detail/CVE-2022-1473
[ 8 ] CVE-2022-2097 https://nvd.nist.gov/vuln/detail/CVE-2022-2097
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-02
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update
Advisory ID: RHSA-2022:6156-01
Product: RHODF
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156
Issue date: 2022-08-24
CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235 CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586 CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675 CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526 CVE-2022-29810 CVE-2022-29824 CVE-2022-31129
====================================================================
1. Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- eventsource: Exposure of Sensitive Information (CVE-2022-1650)
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
- node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
- node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security and networks,etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims .
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing .
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashbaords not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshot don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- References:
https://access.redhat.com/security/cve/CVE-2021-23440
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0536
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1650
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24771
https://access.redhat.com/security/cve/CVE-2022-24772
https://access.redhat.com/security/cve/CVE-2022-24773
https://access.redhat.com/security/cve/CVE-2022-24785
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security fix:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
Bug fixes:
- Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)
- RHACM 2.3.12 images (BZ# 2101411)
- Bugs fixed (https://bugzilla.redhat.com/):
2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation
2101411 - RHACM 2.3.12 images
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- ==========================================================================
Ubuntu Security Notice USN-5502-1
July 05, 2022
openssl vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
OpenSSL could be made to expose sensitive information over the network. A remote attacker could possibly use this issue to obtain sensitive information.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS: libssl3 3.0.2-0ubuntu1.6
Ubuntu 21.10: libssl1.1 1.1.1l-1ubuntu1.6
Ubuntu 20.04 LTS: libssl1.1 1.1.1f-1ubuntu2.16
Ubuntu 18.04 LTS: libssl1.1 1.1.1-1ubuntu2.1~18.04.20
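The fixed package versions above correspond to the upstream OpenSSL ranges reported for CVE-2022-2097: 1.1.1 up to but not including 1.1.1q, and 3.0.0 up to but not including 3.0.5. A minimal sketch of classifying an upstream version string against those ranges (the function name is illustrative, and distro-suffixed package versions such as 1.1.1f-1ubuntu2.16 are deliberately out of scope):

```python
# Classify an upstream OpenSSL version string against the CVE-2022-2097
# affected ranges: 1.1.1 <= v < 1.1.1q and 3.0.0 <= v < 3.0.5.
# Illustrative helper only; it does not parse Debian/Ubuntu package versions.
def affected_by_cve_2022_2097(version: str) -> bool:
    if version.startswith("1.1.1"):
        # 1.1.1 letter releases sort lexicographically: "" < "a" < ... < "q"
        return version[len("1.1.1"):] < "q"
    if version.startswith("3.0."):
        try:
            patch = int(version.split(".")[2])
        except ValueError:
            return False
        return patch < 5  # fixed in 3.0.5
    return False
```

For example, `affected_by_cve_2022_2097("1.1.1p")` is `True`, while `"1.1.1q"` and `"3.0.5"` are classified as not affected.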
After a standard system update you need to reboot your computer to make all the necessary changes.
- Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/
Security fixes:
- CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
- CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
Bug fixes:
- assisted-service repo pin-latest.py script should allow custom tags to be pinned (BZ# 2065661)
- assisted-service-build image is too big in size (BZ# 2066059)
- assisted-service pin-latest.py script should exclude the postgres image (BZ# 2076901)
- PXE artifacts need to be served via HTTP (BZ# 2078531)
- Implementing new service-agent protocol on agent side (BZ# 2081281)
- RHACM 2.6.0 images (BZ# 2090906)
- Assisted service POD keeps crashing after a bare metal host is created (BZ# 2093503)
- Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled (BZ# 2096106)
- Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)
- Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB (BZ# 2099277)
- The pre-selected search keyword is not readable (BZ# 2107736)
- The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI (BZ# 2111843)
- Bugs fixed (https://bugzilla.redhat.com/):
2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned
2066059 - assisted-service-build image is too big in size
2076901 - assisted-service pin-latest.py script should exclude the postgres image
2078531 - iPXE artifacts need to be served via HTTP
2081281 - Implementing new service-agent protocol on agent side
2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors
2090906 - RHACM 2.6.0 images
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2093503 - Assisted service POD keeps crashing after a bare metal host is created
2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled
2096445 - Assisted service POD keeps crashing after a bare metal host is created
2096460 - Spoke BMH stuck "inspecting" when deployed via the converged workflow
2097696 - Fix assisted CI jobs that fail for cluster-info readiness
2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB
2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart
2104117 - Spoke BMH stuck "available" after changing a BIOS attribute via the converged workflow
2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2105339 - Search Application button on the Application Table for Subscription applications does not Redirect
2105357 - [UI] hypershift cluster creation error - n[0] is undefined
2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa
2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications
2107049 - The clusterrole for global clusterset did not created by default
2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {"error": "error listening on :8081: listen tcp :8081: bind: address already in use"}
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107370 - Helm Release resource recreation feature does not work with the local cluster
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
2108888 - Hypershift on AWS - control plane not running
2109370 - The button to create the cluster is not visible
2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6
2111218 - Create cluster - Infrastructure page crashes
2111651 - "View application" button on app table for Flux applications redirects to apiVersion=ocp instead of flux
2111663 - Hosted cluster in Pending import state
2111671 - Leaked namespaces after deleting hypershift deployment
2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs
2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI
2112180 - The policy page is crashed after input keywords in the search box
2112281 - config-policy-controller pod can't startup in the OCP3.11 managed cluster
2112318 - Can't delete the objects which are re-created by policy when deleting the policy
2112321 - BMAC reconcile loop never stops after changes
2112426 - No cluster discovered due to x509: certificate signed by unknown authority
2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped.
2112793 - Can't view details of the policy template when set the spec.pruneObjectBehavior as unsupported value
2112803 - ClusterServiceVersion for release 2.6 branch references "latest" tag
2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster
2113838 - the cluster proxy-agent was deployed on the non-infra nodes
2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace
2114982 - Control plane type shows 'Standalone' for hypershift cluster
2115622 - Hub fromsecret function doesn't work for hosted mode in multiple hub
2115723 - Can't view details of the policy template for customer and hypershift cluster in hosted mode from UI
2115993 - Policy automation details panel was not updated after editing the mode back to disabled
2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status
2116329 - cluster-proxy-agent not startup due to the imagepullbackoff on spoke cluster
2117113 - The proxy-server-host was not correct in cluster-proxy-agent
2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates
2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn't work
2118338 - Report the "namespace not found" error after clicked view yaml link of a policy in the multiple hub env
2119326 - Can't view details of the SecurityContextConstraints policy for managed clusters from UI
- After the clusters are managed, you can use the APIs that are provided by the engine to distribute configuration based on placement policy.
- Description:
Openshift Logging Bug Fix Release (5.3.11)
Security Fix(es):
- golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Description:
Gatekeeper Operator v0.2
Gatekeeper is an open source project that applies the OPA Constraint Framework to enforce policies on your Kubernetes clusters. For support options for any other use, see the Gatekeeper open source project website at: https://open-policy-agent.github.io/gatekeeper/website/docs/howto/.
Security fixes:
- CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets lack random ticket_age_add
- CVE-2022-1705: golang: net/http: improper sanitization of Transfer-Encoding header
- CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse* functions
- CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip
- CVE-2022-30630: golang: io/fs: stack exhaustion in Glob
- CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read
- CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob
- CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode
- CVE-2022-30633: golang: encoding/xml: stack exhaustion in Unmarshal
- CVE-2022-32148: golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
- Solution:
The requirements to apply the upgraded images are different whether or not you used the operator. Complete the following steps, depending on your installation:
- Upgrade gatekeeper operator: The gatekeeper operator that is installed by the gatekeeper operator policy has installPlanApproval set to Automatic. This setting means the operator will be upgraded automatically when there is a new version of the operator. No further action is required for upgrade. If you changed the setting for installPlanApproval to manual, then you must view each cluster to manually approve the upgrade to the operator.
- Upgrade gatekeeper without the operator: The gatekeeper version is specified as part of the Gatekeeper CR in the gatekeeper operator policy. To upgrade the gatekeeper version: a) Determine the latest version of gatekeeper by visiting: https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. b) Click the tag dropdown, and find the latest static tag. An example tag is 'v3.3.0-1'. c) Edit the gatekeeper operator policy and update the image tag to use the latest static tag. For example, you might change this line to image: 'registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1'.
Refer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/ for additional information. Bugs fixed (https://bugzilla.redhat.com/):
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob
2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header
2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions
2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working
2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob
2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode
2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip
2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal
Additional details can be found in the upstream advisories at https://www.openssl.org/news/secadv/20220705.txt and https://www.openssl.org/news/secadv/20230207.txt
For the stable distribution (bullseye), these problems have been fixed in version 1.1.1n-0+deb11u4.
We recommend that you upgrade your openssl packages.
For the detailed security status of openssl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssl
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0107", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": 
null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1q" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "3.0.5" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "clustered data ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.1.1" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2097" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1q", "versionStartIncluding": "1.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0.5", 
"versionStartIncluding": "3.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2097" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168213" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168287" }, { "db": "PACKETSTORM", "id": "168347" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "168280" } ], "trust": 0.7 }, "cve": "CVE-2022-2097", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, 
"@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.0, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 5.0, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "CVE-2022-2097", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 3.9, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": 
"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-2097", "trust": 1.0, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2022-2097", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2097" }, { "db": "NVD", "id": "CVE-2022-2097" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "AES OCB mode for 32-bit x86 platforms using the AES-NI assembly optimised implementation will not encrypt the entirety of the data under some circumstances. This could reveal sixteen bytes of data that was preexisting in the memory that wasn\u0027t written. In the special case of \"in place\" encryption, sixteen bytes of the plaintext would be revealed. Since OpenSSL does not support OCB based cipher suites for TLS and DTLS, they are both unaffected. Fixed in OpenSSL 3.0.5 (Affected 3.0.0-3.0.4). Fixed in OpenSSL 1.1.1q (Affected 1.1.1-1.1.1p). The fix for CVE-2022-1292 did not find other places in the `c_rehash` script where it possibly passed the file names of certificates being hashed to a command executed through the shell. Some operating systems distribute this script in a manner where it is automatically executed. On these operating systems, this flaw allows a malicious user to execute arbitrary commands with the privileges of the script. (CVE-2022-2097). 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-02\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: OpenSSL: Multiple Vulnerabilities\n Date: October 16, 2022\n Bugs: #741570, #809980, #832339, #835343, #842489, #856592\n ID: 202210-02\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in OpenSSL, the worst of\nwhich could result in denial of service. \n\nBackground\n==========\n\nOpenSSL is an Open Source toolkit implementing the Secure Sockets Layer\n(SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general\npurpose cryptography library. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/openssl \u003c 1.1.1q \u003e= 1.1.1q\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSL. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll OpenSSL users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/openssl-1.1.1q\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-1968\n https://nvd.nist.gov/vuln/detail/CVE-2020-1968\n[ 2 ] CVE-2021-3711\n https://nvd.nist.gov/vuln/detail/CVE-2021-3711\n[ 3 ] CVE-2021-3712\n https://nvd.nist.gov/vuln/detail/CVE-2021-3712\n[ 4 ] CVE-2021-4160\n https://nvd.nist.gov/vuln/detail/CVE-2021-4160\n[ 5 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 6 ] CVE-2022-1292\n https://nvd.nist.gov/vuln/detail/CVE-2022-1292\n[ 7 ] CVE-2022-1473\n https://nvd.nist.gov/vuln/detail/CVE-2022-1473\n[ 8 ] CVE-2022-2097\n https://nvd.nist.gov/vuln/detail/CVE-2022-2097\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-02\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID: RHSA-2022:6156-01\nProduct: RHODF\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6156\nIssue date: 2022-08-24\nCVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. 
\n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References 
section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 
follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\": \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks,etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashbaords not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy\nXo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N\nb3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY\nsJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj\ngz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am\n2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf\nmX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl\nCSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE\nhTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY\n3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E\n2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V\nERCapzwcRuQ=gV+z\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity fix:\n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\nBug fixes:\n\n* Remove 1.9.1 from Proxy Patch Documentation (BZ# 2076856)\n\n* RHACM 2.3.12 images (BZ# 2101411)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2076856 - [doc] Remove 1.9.1 from Proxy Patch Documentation\n2101411 - RHACM 2.3.12 images\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. ==========================================================================\nUbuntu Security Notice USN-5502-1\nJuly 05, 2022\n\nopenssl vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nOpenSSL could be made to expose sensitive information over the network. A remote attacker could possibly use this issue to obtain\nsensitive information. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libssl3 3.0.2-0ubuntu1.6\n\nUbuntu 21.10:\n libssl1.1 1.1.1l-1ubuntu1.6\n\nUbuntu 20.04 LTS:\n libssl1.1 1.1.1f-1ubuntu2.16\n\nUbuntu 18.04 LTS:\n libssl1.1 1.1.1-1ubuntu2.1~18.04.20\n\nAfter a standard system update you need to reboot your computer to make all\nthe necessary changes. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/\n\nSecurity fixes: \n\n* CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n* CVE-2022-30629 golang: crypto/tls: session tickets lack random\nticket_age_add\n\n* CVE-2022-1705 golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n\n* CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\nBug fixes:\n\n* assisted-service repo pin-latest.py script should allow custom tags to be\npinned (BZ# 2065661)\n\n* assisted-service-build image is too big in size (BZ# 2066059)\n\n* assisted-service pin-latest.py script should exclude the postgres image\n(BZ# 2076901)\n\n* PXE artifacts need to be served via HTTP (BZ# 2078531)\n\n* Implementing new service-agent protocol on agent side (BZ# 2081281)\n\n* RHACM 2.6.0 images (BZ# 2090906)\n\n* Assisted service POD keeps crashing after a bare metal host is created\n(BZ# 2093503)\n\n* Assisted service triggers the worker nodes re-provisioning on the hub\ncluster when the converged flow is enabled (BZ# 2096106)\n\n* Fix assisted CI jobs that fail for cluster-info readiness (BZ# 2097696)\n\n* Nodes are required to have installation disks of at least 120GB instead\nof at minimum of 100GB (BZ# 
2099277)\n\n* The pre-selected search keyword is not readable (BZ# 2107736)\n\n* The value of label expressions in the new placement for policy and\npolicysets cannot be shown real-time from UI (BZ# 2111843)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2065661 - assisted-service repo pin-latest.py script should allow custom tags to be pinned\n2066059 - assisted-service-build image is too big in size\n2076901 - assisted-service pin-latest.py script should exclude the postgres image\n2078531 - iPXE artifacts need to be served via HTTP\n2081281 - Implementing new service-agent protocol on agent side\n2090901 - Capital letters in install-config.yaml .platform.baremetal.hosts[].name cause bootkube errors\n2090906 - RHACM 2.6.0 images\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2093503 - Assisted service POD keeps crashing after a bare metal host is created\n2096106 - Assisted service triggers the worker nodes re-provisioning on the hub cluster when the converged flow is enabled\n2096445 - Assisted service POD keeps crashing after a bare metal host is created\n2096460 - Spoke BMH stuck \"inspecting\" when deployed via the converged workflow\n2097696 - Fix assisted CI jobs that fail for cluster-info readiness\n2099277 - Nodes are required to have installation disks of at least 120GB instead of at minimum of 100GB\n2103703 - Automatic version upgrade triggered for oadp operator installed by cluster-backup-chart\n2104117 - Spoke BMH stuck ?available? 
after changing a BIOS attribute via the converged workflow\n2104984 - Infrastructure operator missing clusterrole permissions for interacting with mutatingwebhookconfigurations\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2105339 - Search Application button on the Application Table for Subscription applications does not Redirect\n2105357 - [UI] hypershift cluster creation error - n[0] is undefined\n2106347 - Submariner error looking up service account submariner-operator/submariner-addon-sa\n2106882 - Security Context Restrictions are restricting creation of some pods which affects the deployment of some applications\n2107049 - The clusterrole for global clusterset did not created by default\n2107065 - governance-policy-framework in CrashLoopBackOff state on spoke cluster: Failed to start manager {\"error\": \"error listening on :8081: listen tcp :8081: bind: address already in use\"}\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107370 - Helm Release resource recreation feature does not work with the local cluster\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n2108888 - Hypershift on AWS - control plane not running\n2109370 - The button to create the cluster is not visible\n2111203 - Add ocp 4.11 to filters for discovering clusters in ACM 2.6\n2111218 - Create cluster - 
Infrastructure page crashes\n2111651 - \"View application\" button on app table for Flux applications redirects to apiVersion=ocp instead of flux\n2111663 - Hosted cluster in Pending import state\n2111671 - Leaked namespaces after deleting hypershift deployment\n2111770 - [ACM 2.6] there is no node info for remote cluster in multiple hubs\n2111843 - The value of label expressions in the new placement for policy and policysets cannot be shown real-time from UI\n2112180 - The policy page is crashed after input keywords in the search box\n2112281 - config-policy-controller pod can\u0027t startup in the OCP3.11 managed cluster\n2112318 - Can\u0027t delete the objects which are re-created by policy when deleting the policy\n2112321 - BMAC reconcile loop never stops after changes\n2112426 - No cluster discovered due to x509: certificate signed by unknown authority\n2112478 - Value of delayAfterRunSeconds is not shown on the final submit panel and the word itself should not be wrapped. \n2112793 - Can\u0027t view details of the policy template when set the spec.pruneObjectBehavior as unsupported value\n2112803 - ClusterServiceVersion for release 2.6 branch references \"latest\" tag\n2113787 - [ACM 2.6] can not delete namespaces after detaching the hosted cluster\n2113838 - the cluster proxy-agent was deployed on the non-infra nodes\n2113842 - [ACM 2.6] must restart hosting cluster registration pod if update work-manager-addon cr to change installNamespace\n2114982 - Control plane type shows \u0027Standalone\u0027 for hypershift cluster\n2115622 - Hub fromsecret function doesn\u0027t work for hosted mode in multiple hub\n2115723 - Can\u0027t view details of the policy template for customer and hypershift cluster in hosted mode from UI\n2115993 - Policy automation details panel was not updated after editing the mode back to disabled\n2116211 - Count of violations with unknown status was not accurate when managed clusters have mixed status\n2116329 - cluster-proxy-agent not 
startup due to the imagepullbackoff on spoke cluster\n2117113 - The proxy-server-host was not correct in cluster-proxy-agent\n2117187 - pruneObjectBehavior radio selection cannot work well and always switch the first one template in multiple configurationPolicy templates\n2117480 - [ACM 2.6] infra-id of HypershiftDeployment doesn\u0027t work\n2118338 - Report the \"namespace not found\" error after clicked view yaml link of a policy in the multiple hub env\n2119326 - Can\u0027t view details of the SecurityContextConstraints policy for managed clusters from UI\n\n5. After the clusters are managed, you can use the APIs that\nare provided by the engine to distribute configuration based on placement\npolicy. Description:\n\nOpenshift Logging Bug Fix Release (5.3.11)\n\nSecurity Fix(es):\n\n* golang: compress/gzip: stack exhaustion in Reader.Read (CVE-2022-30631)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Description:\n\nGatekeeper Operator v0.2\n\nGatekeeper is an open source project that applies the OPA Constraint\nFramework to enforce policies on your Kubernetes clusters. For support options for any other use, see the Gatekeeper\nopen source project website at:\nhttps://open-policy-agent.github.io/gatekeeper/website/docs/howto/. 
\n\nSecurity fix:\n\n* CVE-2022-30629: gatekeeper-container: golang: crypto/tls: session tickets\nlack random ticket_age_add\n\n* CVE-2022-1705: golang: net/http: improper sanitization of\nTransfer-Encoding header\n\n* CVE-2022-1962: golang: go/parser: stack exhaustion in all Parse*\nfunctions\n\n* CVE-2022-28131: golang: encoding/xml: stack exhaustion in Decoder.Skip\n\n* CVE-2022-30630: golang: io/fs: stack exhaustion in Glob\n\n* CVE-2022-30631: golang: compress/gzip: stack exhaustion in Reader.Read\n\n* CVE-2022-30632: golang: path/filepath: stack exhaustion in Glob\n\n* CVE-2022-30635: golang: encoding/gob: stack exhaustion in Decoder.Decode\n\n* CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n* CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy -\nomit X-Forwarded-For not working\n\n3. Solution:\n\nThe requirements to apply the upgraded images are different whether or not\nyou\nused the operator. Complete the following steps, depending on your\ninstallation:\n\n* Upgrade gatekeeper operator:\nThe gatekeeper operator that is installed by the gatekeeper operator policy\nhas\n`installPlanApproval` set to `Automatic`. This setting means the operator\nwill\nbe upgraded automatically when there is a new version of the operator. No\nfurther action is required for upgrade. If you changed the setting for\n`installPlanApproval` to `manual`, then you must view each cluster to\nmanually\napprove the upgrade to the operator. \n\n* Upgrade gatekeeper without the operator:\nThe gatekeeper version is specified as part of the Gatekeeper CR in the\ngatekeeper operator policy. To upgrade the gatekeeper version:\na) Determine the latest version of gatekeeper by visiting:\nhttps://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9. \nb) Click the tag dropdown, and find the latest static tag. An example tag\nis\n\u0027v3.3.0-1\u0027. 
\nc) Edit the gatekeeper operator policy and update the image tag to use the\nlatest static tag. For example, you might change this line to image:\n\u0027registry.redhat.io/rhacm2/gatekeeper-rhel8:v3.3.0-1\u0027. \n\nRefer to https://open-policy-agent.github.io/gatekeeper/website/docs/howto/\nfor additional information. Bugs fixed (https://bugzilla.redhat.com/):\n\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n2107371 - CVE-2022-30630 golang: io/fs: stack exhaustion in Glob\n2107374 - CVE-2022-1705 golang: net/http: improper sanitization of Transfer-Encoding header\n2107376 - CVE-2022-1962 golang: go/parser: stack exhaustion in all Parse* functions\n2107383 - CVE-2022-32148 golang: net/http/httputil: NewSingleHostReverseProxy - omit X-Forwarded-For not working\n2107386 - CVE-2022-30632 golang: path/filepath: stack exhaustion in Glob\n2107388 - CVE-2022-30635 golang: encoding/gob: stack exhaustion in Decoder.Decode\n2107390 - CVE-2022-28131 golang: encoding/xml: stack exhaustion in Decoder.Skip\n2107392 - CVE-2022-30633 golang: encoding/xml: stack exhaustion in Unmarshal\n\n5. \n\nAdditional details can be found in the upstream advisories at\nhttps://www.openssl.org/news/secadv/20220705.txt and\nhttps://www.openssl.org/news/secadv/20230207.txt\n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 1.1.1n-0+deb11u4. \n\nWe recommend that you upgrade your openssl packages. 
\n\nFor the detailed security status of openssl please refer to its security\ntracker page at:\nhttps://security-tracker.debian.org/tracker/openssl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmPivONfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0RBCA/+IqJ9qtjytulO41yPphASSEu22XVN9EYAUsdcpsTmnDtp1zUQSZpQv5qk\n464Z2+0SkNtiHm5O5z5fs4LX0wXYBvLYrFnh2X2Z6rT+YFhXg8ZdEo+IysYSV7gB\nutbb1zbSqUSSLmlF/r6SnXy+HlTyB56p+k0MnLNHejes6DoghebZJGU6Dl5D8Z2J\nwOB6xi2sS3zVl1O+8//PPk5Sha8ESShuP/sBby01Xvpl65+8Icn7dXXHFNUn27rZ\nWdQCdxJaUJiqjZYzI5XAB+zHl8KNDiWP9MqIeT3g+YQ+nzSTeHxRPXDTDvClMv9y\nCJ90PaCY1DBNh5NrE2/IZkpIOKvTjRX3+db7Nab2GyRzLCP7p+1Bm14zHiKRHPOR\nt/6yX11diIF2zvlP/7qeCGkutv9KrFjSW81o1GgJMdt8uduHa95IgKNNUsA6Wf3O\nSkUP4EYfhXs2+TIfEenvqLuAmLsQBCRCvNDdmEGhtR4r0hpvcJ4eOaDBE6FWih1J\ni0mpDIjBYOV2iEUe85XfYflrcFfaxSwbl4ultH3Q3eWtiMwLgXqJ9dKRQEXJX7hp\n48zKPwnftJbGBri9Y293sMjcpv3F/PTjXMh8LcUSVDkVVdQ8cLSmdmP4v4wSzV/q\nZ7KATUs6YAod4ts5u3/zD97Mzk0Xiecw/ggevbCfCvQTByk02Fg=\n=lXE/\n-----END PGP SIGNATURE-----\n", "sources": [ { "db": "NVD", "id": "CVE-2022-2097" }, { "db": "VULMON", "id": "CVE-2022-2097" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168213" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "167708" }, { "db": "PACKETSTORM", "id": "168287" }, { "db": "PACKETSTORM", "id": "168347" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "168280" }, { "db": "PACKETSTORM", "id": "170896" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": {
"@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-2097", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2022-2097", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168714", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168150", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168213", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168378", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167708", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168287", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168347", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168289", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168280", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170896", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2097" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168213" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "167708" }, { "db": "PACKETSTORM", "id": "168287" }, { "db": "PACKETSTORM", "id": "168347" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "168280" }, { "db": "PACKETSTORM", "id": "170896" }, { "db": "NVD", "id": "CVE-2022-2097" } ] }, "id": "VAR-202207-0107", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T19:53:59.023000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, 
"data": [ { "title": "Amazon Linux 2: ALAS2-2023-1974", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-1974" }, { "title": "Red Hat: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-2097" }, { "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0" }, { "title": "Red Hat: Moderate: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225818 - security advisory" }, { "title": "Red Hat: Moderate: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226224 - security advisory" }, { "title": "Debian Security Advisories: DSA-5343-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=b6a11b827fe9cfaea9c113b2ad37856f" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226517 - security advisory" }, { "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226184 - security advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-147", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-147" }, { "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226422 - security advisory" }, { "title": "Red Hat: 
Moderate: OpenShift Container Platform 4.11.1 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226103 - security advisory" }, { "title": "Brocade Security Advisories: Access Denied", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=38e06d13217149784c0941a3098b8989" }, { "title": "Amazon Linux 2022: ALAS2022-2022-195", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-195" }, { "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226188 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226182 - security advisory" }, { "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226051 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226283 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226183 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226507 - security advisory" }, { "title": "Red Hat: Moderate: RHOSDT 2.6.0 
operator/operand containers Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227055 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227058 - security advisory" }, { "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226024 - security advisory" }, { "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226714 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226290 - security advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226348 - security advisory" }, { "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226345 - security advisory" }, { "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226346 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226430 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226370 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226271 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226696 - security advisory" }, { "title": "Hitachi Security Advisories: Multiple Vulnerabilities in Hitachi Command Suite, Hitachi Automation Director, Hitachi Configuration Manager, Hitachi Infrastructure Analytics Advisor and Hitachi Ops Center", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=hitachi_security_advisories\u0026qid=hitachi-sec-2023-126" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228750 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security 
and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230408 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228889 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228781 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "https://github.com/jntass/TASSL-1.1.1", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1 " }, { "title": "BIF - The Fairwinds Base Image Finder Client", "trust": 0.1, "url": "https://github.com/fairwindsops/bif " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "GitHub Actions CI App Pipeline", "trust": 0.1, "url": "https://github.com/isgo-golgo13/gokit-gorillakit-enginesvc " }, { "title": "https://github.com/cdupuis/image-api", "trust": 0.1, "url": "https://github.com/cdupuis/image-api " }, { "title": "OpenSSL-CVE-lib", "trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/nomi-sec/poc-in-github " }, { "title": "PoC in GitHub", "trust": 0.1, "url": 
"https://github.com/manas3c/cve-poc " } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2097" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-327", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2097" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://www.openssl.org/news/secadv/20220705.txt" }, { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202210-02" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220715-0011/" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 1.1, "url": "https://www.debian.org/security/2023/dsa-5343" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2023/02/msg00019.html" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20230420-0008/" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=a98f339ddd7e8f487d6e0088d4a9a42324885a93" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=919925673d6c9cfed3c1085497f5dfbbed5fc431" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/v6567jerrhhjw2gngjgkdrnhr7snpzk7/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/r6ck57nbqftpumxapjurcgxuyt76nqak/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 1.0, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#critical" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-36067" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32148" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30630" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30635" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1705" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { 
"trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28131" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30633" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30632" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1962" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/327.html" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2/alas-2023-1974.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://github.com/fairwindsops/bif" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-23-017-03" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2022/alas-2022-195.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1968" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1473" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4160" }, { "trust": 0.1, "url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21123" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21166" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21125" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1966" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26137" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1966" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-26137" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3177" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6271" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6507" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1-1ubuntu2.1~18.04.20" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1f-1ubuntu2.16" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/3.0.2-0ubuntu1.6" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/openssl/1.1.1l-1ubuntu1.6" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5502-1" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6370" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6422" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/multicluster_engine/index#installing-while-connected-online" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-36067" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6182" }, { "trust": 0.1, "url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/." }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6348" }, { "trust": 0.1, "url": "https://catalog.redhat.com/software/containers/rhacm2/gatekeeper-rhel8/5fadb4a18d9a79d2f438a5d9." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30632" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824" }, { "trust": 0.1, "url": "https://open-policy-agent.github.io/gatekeeper/website/docs/howto/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30630" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4450" }, { "trust": 0.1, "url": "https://www.openssl.org/news/secadv/20230207.txt" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0215" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/openssl" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-0286" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4304" }, { "trust": 0.1, "url": "https://www.debian.org/security/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2097" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168213" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "167708" }, { "db": "PACKETSTORM", "id": "168287" }, { "db": "PACKETSTORM", "id": "168347" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "168280" }, { 
"db": "PACKETSTORM", "id": "170896" }, { "db": "NVD", "id": "CVE-2022-2097" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-2097" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "168213" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "167708" }, { "db": "PACKETSTORM", "id": "168287" }, { "db": "PACKETSTORM", "id": "168347" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "168280" }, { "db": "PACKETSTORM", "id": "170896" }, { "db": "NVD", "id": "CVE-2022-2097" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-07-05T00:00:00", "db": "VULMON", "id": "CVE-2022-2097" }, { "date": "2022-10-17T13:44:06", "db": "PACKETSTORM", "id": "168714" }, { "date": "2022-08-25T15:22:18", "db": "PACKETSTORM", "id": "168150" }, { "date": "2022-09-01T16:30:25", "db": "PACKETSTORM", "id": "168213" }, { "date": "2022-09-14T15:08:07", "db": "PACKETSTORM", "id": "168378" }, { "date": "2022-07-06T15:29:36", "db": "PACKETSTORM", "id": "167708" }, { "date": "2022-09-07T17:07:14", "db": "PACKETSTORM", "id": "168287" }, { "date": "2022-09-13T15:29:12", "db": "PACKETSTORM", "id": "168347" }, { "date": "2022-09-07T17:09:04", "db": "PACKETSTORM", "id": "168289" }, { "date": "2022-09-07T16:53:57", "db": "PACKETSTORM", "id": "168280" }, { "date": "2023-02-08T15:58:04", "db": "PACKETSTORM", "id": "170896" }, { "date": "2022-07-05T11:15:08.340000", "db": "NVD", "id": "CVE-2022-2097" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2022-2097" }, { "date": "2024-06-21T19:15:23.083000", 
"db": "NVD", "id": "CVE-2022-2097" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "167708" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo Linux Security Advisory 202210-02", "sources": [ { "db": "PACKETSTORM", "id": "168714" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "info disclosure", "sources": [ { "db": "PACKETSTORM", "id": "170896" } ], "trust": 0.1 } }
var-202312-0208
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected software does not correctly validate the response received from a UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427).
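The flaw above falls into the improper-check-of-unusual-conditions class (CWE-754): a client crashes because it trusts whatever the server returns. The sketch below is a hypothetical illustration of the defensive pattern, not Siemens code; the response format and field names are assumptions made for the example.

```python
# Hypothetical sketch (not SINEC INS code): validate an authentication
# server's response before acting on it, so a malicious or tampered
# response is rejected instead of crashing the client (CWE-754 class).
import json

REQUIRED_FIELDS = {"status", "session_token", "expires_in"}  # assumed schema


def parse_umc_response(raw: bytes) -> dict:
    """Return the parsed response, or raise ValueError on anything unexpected."""
    try:
        payload = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError(f"malformed response: {exc}") from exc
    # Reject structurally valid JSON that is not the expected object shape.
    if not isinstance(payload, dict) or not REQUIRED_FIELDS.issubset(payload):
        raise ValueError("response missing required fields")
    if not isinstance(payload["expires_in"], int) or payload["expires_in"] <= 0:
        raise ValueError("invalid expiry value")
    return payload
```

The key point is that every branch ends either in a fully validated object or in a controlled error path, so attacker-supplied traffic cannot reach the caller unchecked.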
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0208", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "configurations": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "cve": "CVE-2023-48431", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.6, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 4.0, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 1.0, "userInteraction": 
"NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 6.8, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 4.0, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2023-48431", "trust": 1.0, "value": "HIGH" }, { "author": "productcert@siemens.com", "id": "CVE-2023-48431", "trust": 1.0, "value": "MEDIUM" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" }, { "db": "NVD", "id": "CVE-2023-48431" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. 
leveraging CVE-2023-48427).", "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ], "trust": 1.0 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "SIEMENS", "id": "SSA-077170", "trust": 1.0 }, { "db": "NVD", "id": "CVE-2023-48431", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "id": "VAR-202312-0208", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:38:07.179000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-754", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.0, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "NVD", "id": "CVE-2023-48431" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-12T12:15:15.777000", "db": "NVD", "id": "CVE-2023-48431" } ] }, "sources_update_date": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-14T19:37:00.257000", "db": "NVD", "id": "CVE-2023-48431" } ] } }
var-202301-0547
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management interface (443/tcp) of the affected product could inject commands into its dhcpd configuration. An attacker might leverage this to achieve remote code execution on the affected component. SINEC INS contains a command injection vulnerability: information may be obtained or tampered with, and service operation may be interrupted (DoS).
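The root cause of this class of bug is user input flowing unvalidated into a generated dhcpd configuration. The following sketch shows the usual mitigation, strict allow-list validation before any value is embedded in the config; it is an illustrative example under assumed field names, not the actual SINEC INS fix.

```python
# Hypothetical mitigation sketch (not the SINEC INS patch): validate
# user-supplied hostname and IP strictly before embedding them in a
# dhcpd.conf fragment, so metacharacters like ';', '{' or newlines
# can never break out of the intended directive.
import ipaddress
import re

# RFC 952/1123-style label: alphanumerics and inner hyphens only.
HOSTNAME_RE = re.compile(r"^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$")


def dhcpd_host_entry(hostname: str, ip: str) -> str:
    """Build a single 'host' declaration from validated inputs."""
    if not HOSTNAME_RE.match(hostname):
        raise ValueError("invalid hostname")
    addr = ipaddress.ip_address(ip)  # raises ValueError on malformed input
    return f"host {hostname} {{ fixed-address {addr}; }}"
```

Because both fields are checked against closed grammars (a hostname regex and `ipaddress` parsing), injection payloads are rejected before the configuration file is ever written.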
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0547", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": "1.0 sp2 update 1" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-45094" } ] }, "cve": "CVE-2022-45094", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": 
"HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "ADJACENT_NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 8.4, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.7, "impactScore": 6.0, "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "CHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:A/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 8.8, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-45094", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "Low", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-45094", "trust": 1.8, "value": "HIGH" }, { "author": "productcert@siemens.com", "id": "CVE-2022-45094", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202301-661", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). 
An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS Contains a command injection vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "JVNDB", "id": "JVNDB-2023-001790" } ], "trust": 1.62 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-45094", "trust": 3.2 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.6 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-001790", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202301-661", "trust": 0.6 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "id": "VAR-202301-0547", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:07:32.551000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "SSA-332410", "trust": 0.8, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "title": "Siemens SINEC NMS Fixes for command injection vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=221646" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-77", "trust": 1.0 }, { "problemtype": "Command injection (CWE-77) [ others ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/index.html" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45094" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-45094/" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "db": "NVD", "id": "CVE-2022-45094" }, { "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": 
"2023-05-12T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "date": "2023-01-10T12:15:23.590000", "db": "NVD", "id": "CVE-2022-45094" }, { "date": "2023-01-10T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-05-12T04:41:00", "db": "JVNDB", "id": "JVNDB-2023-001790" }, { "date": "2023-01-14T00:43:06.910000", "db": "NVD", "id": "CVE-2022-45094" }, { "date": "2023-01-16T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-661" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-661" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "SINEC\u00a0INS\u00a0 Command injection vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001790" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "command injection", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-661" } ], "trust": 0.6 } }
var-202207-0587
Vulnerability from variot
The llhttp parser in the http module in Node.js <v14.20.1, <v16.17.1 and <v18.9.1 does not correctly parse and validate Transfer-Encoding headers, which can lead to HTTP Request Smuggling (HRS). llhttp itself, and products from other vendors that incorporate it, are affected by this HTTP request smuggling vulnerability. Information may be obtained and information may be tampered with.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
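The parsing flaw is easiest to see with a toy illustration. The sketch below is hypothetical Python, not llhttp code, and the helper names are invented for illustration: it shows how a front end that loosely matches the Transfer-Encoding value and a back end that validates it strictly can disagree about how the same request body is framed, which is the precondition for request smuggling.

```python
# Hypothetical sketch (not llhttp code): why lax Transfer-Encoding
# parsing enables HTTP request smuggling. A front end that
# substring-matches "chunked" and a back end that compares the final
# transfer coding exactly will disagree on how the body is framed.

def lax_is_chunked(value: str) -> bool:
    # Flawed: treats any value containing "chunked" as chunked encoding.
    return "chunked" in value.lower()

def strict_is_chunked(value: str) -> bool:
    # Stricter: the final transfer coding must be exactly "chunked".
    codings = [c.strip().lower() for c in value.split(",")]
    return codings[-1] == "chunked"

header = "xchunked"  # not a valid chunked transfer coding
assert lax_is_chunked(header) is True       # front end: body is chunked
assert strict_is_chunked(header) is False   # back end: body is not chunked
assert strict_is_chunked("gzip, chunked") is True  # a legitimate value
```

When the two components disagree, the bytes one side treats as a chunked body can be interpreted by the other as the start of a second, attacker-controlled request.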
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update
Advisory ID: RHSA-2022:6389-01
Product: Red Hat Software Collections
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6389
Issue date: 2022-09-08
CVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-33987
====================================================================
1. Summary:
An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
3. Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).
Security Fix(es):

* nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)

* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)

* nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)

* nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)

* got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
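One of the fixed issues (CVE-2022-32215) concerns multi-line, "obs-folded" Transfer-Encoding values. The following is a hedged sketch of the underlying hazard in hypothetical Python, not Node.js internals: a value continued onto a second line must be treated as part of the same header field, or two parsers can disagree about message framing.

```python
# Illustrative sketch (not Node.js/llhttp code) of the obs-fold hazard
# behind CVE-2022-32215: a header value continued onto the next line
# (a line starting with SP or HTAB) belongs to the previous header.
raw = "Transfer-Encoding: chunked\r\n , identity\r\n"

def parse_no_fold(raw: str) -> list:
    # Flawed: each physical line becomes its own (possibly bogus) header.
    return [line for line in raw.split("\r\n") if line]

def parse_with_fold(raw: str) -> list:
    # Per RFC 7230 obs-fold handling: a line starting with SP/HTAB
    # continues the previous header's value.
    headers = []
    for line in raw.split("\r\n"):
        if not line:
            continue
        if line[0] in " \t" and headers:
            headers[-1] += " " + line.strip()
        else:
            headers.append(line)
    return headers

assert parse_no_fold(raw) == ["Transfer-Encoding: chunked", " , identity"]
assert parse_with_fold(raw) == ["Transfer-Encoding: chunked , identity"]
```

A chain where one hop folds the continuation line and the next hop does not will see two different Transfer-Encoding values for the same request.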
Bug Fix(es):

* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
5. Bugs fixed (https://bugzilla.redhat.com/):
2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets
2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses
2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding
2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields
2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding
2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]
6. Package List:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):
Source:
rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm
rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch:
rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm
rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

ppc64le:
rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm
rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm
rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm
rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm

s390x:
rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm
rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm
rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm
rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm

x86_64:
rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):
Source:
rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm
rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm

noarch:
rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm
rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm

x86_64:
rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm
rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2022-32212
https://access.redhat.com/security/cve/CVE-2022-32213
https://access.redhat.com/security/cve/CVE-2022-32214
https://access.redhat.com/security/cve/CVE-2022-32215
https://access.redhat.com/security/cve/CVE-2022-33987
https://access.redhat.com/security/updates/classification/#moderate
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-5326-1                   security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                      https://www.debian.org/security/faq
Package : nodejs
CVE ID  : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548
Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
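Beyond upgrading, proxies commonly reduce request-smuggling exposure by rejecting ambiguously framed messages. Below is a minimal illustrative check (hypothetical code, not part of the Debian packages), following the RFC 7230 guidance that a request carrying both Transfer-Encoding and Content-Length has ambiguous framing and ought to be treated as an error.

```python
# Illustrative mitigation sketch (not Debian/Node.js code): reject
# requests whose body framing is ambiguous before forwarding them.

def is_ambiguous_framing(headers: dict) -> bool:
    # Normalize header names, since HTTP field names are case-insensitive.
    keys = {k.lower() for k in headers}
    # Both Transfer-Encoding and Content-Length present: the two framing
    # mechanisms conflict, which is the classic smuggling ambiguity.
    return "transfer-encoding" in keys and "content-length" in keys

assert is_ambiguous_framing({"Transfer-Encoding": "chunked",
                             "Content-Length": "4"}) is True
assert is_ambiguous_framing({"Content-Length": "4"}) is False
```

This is only one layer of defense; upgrading to a parser that validates Transfer-Encoding strictly remains the actual fix.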
For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.
We recommend that you upgrade your nodejs packages.
For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29
https://security.gentoo.org/
Severity: Low
Title: Node.js: Multiple Vulnerabilities
Date: May 08, 2024
Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Affected packages
Package          Vulnerable    Unaffected
---------------  ------------  ------------
net-libs/nodejs  < 16.20.2     >= 16.20.2
Description
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256 https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0587", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "16.12.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": 
"fedoraproject", "version": "37" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.20.1" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "16.17.1" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "2.1.5" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "6.0.7" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.13.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.9.1" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "management center", "scope": "lt", "trust": 1.0, "vendor": "stormshield", "version": "3.3.2" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "llhttp", "scope": "gte", "trust": 1.0, "vendor": "llhttp", "version": "6.0.0" }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "management center", "scope": null, "trust": 0.8, "vendor": "stormshield", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "llhttp", "scope": null, "trust": 0.8, "vendor": "llhttp", "version": null }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "2.1.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "6.0.7", "versionStartIncluding": "6.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "16.12.0", "versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.20.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "16.17.1", "versionStartIncluding": "16.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "18.9.1", "versionStartIncluding": "18.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:37:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" 
}, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:stormshield:stormshield_management_center:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.3.2", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-32213" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" } ], "trust": 0.5 }, "cve": "CVE-2022-32213", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", 
"attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 3.9, "impactScore": 2.5, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.5, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2022-32213", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-32213", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202207-683", "trust": 0.6, "value": "MEDIUM" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "CNNVD", "id": "CNNVD-202207-683" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The llhttp parser \u003cv14.20.1, \u003cv16.17.1 and \u003cv18.9.1 in the http module in Node.js does not correctly parse and validate Transfer-Encoding headers and can lead to HTTP Request Smuggling (HRS). llhttp of llhttp For products from other vendors, HTTP There is a vulnerability related to request smuggling.Information may be obtained and information may be tampered with. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID: RHSA-2022:6389-01\nProduct: Red Hat Software Collections\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6389\nIssue date: 2022-09-08\nCVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). 
\n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 24, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nodejs\nCVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. 
\n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage Vulnerable Unaffected\n--------------- ------------ ------------\nnet-libs/nodejs \u003c 16.20.2 \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n 
https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-32213" }, { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "VULMON", "id": "CVE-2022-32213" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 2.34 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-32213", "trust": 4.0 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 2.4 }, { "db": "HACKERONE", "id": "1524555", "trust": 2.4 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-013368", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168305", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169410", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168442", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168359", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170727", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.3673", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3488", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3505", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3487", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4136", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4101", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3586", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4681", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071827", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071338", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072639", "trust": 0.6 }, { 
"db": "CS-HELP", "id": "SB2022072522", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071612", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202207-683", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-32213", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168358", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32213" }, { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-683" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "id": "VAR-202207-0587", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-05-12T03:18:55.457000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-444", "trust": 1.0 }, { "problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.5, "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/" }, { "trust": 2.4, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 2.4, "url": "https://hackerone.com/reports/1524555" }, { "trust": 2.4, "url": "https://www.debian.org/security/2023/dsa-5326" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/" }, { "trust": 1.4, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/" }, { "trust": 1.1, "url": "https://access.redhat.com/security/cve/cve-2022-32213" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 0.6, "url": "https://security.netapp.com/advisory/ntap-20220915-0001/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3505" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168359/red-hat-security-advisory-2022-6448-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4681" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4101" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3673" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4136" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3487" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071827" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3586" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3488" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-32213/" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32214" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32212" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": 
"https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32215" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-33987" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6389" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6985" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6449" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6448" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nodejs" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32213" }, { "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-683" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-32213" }, { 
"db": "JVNDB", "id": "JVNDB-2022-013368" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-683" }, { "db": "NVD", "id": "CVE-2022-32213" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-07T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "date": "2022-09-08T14:41:32", "db": "PACKETSTORM", "id": "168305" }, { "date": "2022-10-18T22:30:49", "db": "PACKETSTORM", "id": "169410" }, { "date": "2022-09-21T13:47:04", "db": "PACKETSTORM", "id": "168442" }, { "date": "2022-09-13T15:43:41", "db": "PACKETSTORM", "id": "168358" }, { "date": "2022-09-13T15:43:55", "db": "PACKETSTORM", "id": "168359" }, { "date": "2023-01-25T16:09:12", "db": "PACKETSTORM", "id": "170727" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-07-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-683" }, { "date": "2022-07-14T15:15:08.287000", "db": "NVD", "id": "CVE-2022-32213" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-07T08:25:00", "db": "JVNDB", "id": "JVNDB-2022-013368" }, { "date": "2023-02-01T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-683" }, { "date": "2023-11-07T03:47:46.473000", "db": "NVD", "id": "CVE-2022-32213" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-683" } ], "trust": 0.6 }, "title": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "llhttp\u00a0 of \u00a0llhttp\u00a0 in products from other multiple vendors \u00a0HTTP\u00a0 Request Smuggling Vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013368" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "environmental issue", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-683" } ], "trust": 0.6 } }
var-202206-1428
Vulnerability from variot
In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When CVE-2022-1292 was fixed, it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:
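The vulnerability class behind CVE-2022-2068 can be illustrated with a minimal sketch (this is illustrative Python, not the actual c_rehash Perl code): interpolating an untrusted certificate filename into a shell command lets a crafted name inject commands, while passing the filename as an argument-list element never invokes a shell at all.

```python
# Minimal sketch of the c_rehash vulnerability class (hypothetical helper
# names; not the actual script). A filename like "x.pem; touch /tmp/pwned"
# is harmless as an argv element but executes code when shell-interpolated.
import shlex
import subprocess

def hash_cert_unsafe(filename: str) -> str:
    # DANGEROUS: the whole string is evaluated by /bin/sh, so shell
    # metacharacters (;, |, $(), backticks) in the filename are executed.
    return subprocess.run(
        f"openssl x509 -hash -noout -in {filename}",
        shell=True, capture_output=True, text=True,
    ).stdout

def hash_cert_safe(filename: str) -> str:
    # SAFE: argument-list form execs openssl directly; metacharacters in
    # the filename are just literal bytes of the path argument.
    return subprocess.run(
        ["openssl", "x509", "-hash", "-noout", "-in", filename],
        capture_output=True, text=True,
    ).stdout

# When a shell string is truly unavoidable, quoting neutralises the name:
assert shlex.quote("x.pem; rm -rf /") == "'x.pem; rm -rf /'"
```

The `openssl rehash` replacement recommended above avoids the problem by construction, since it processes the directory natively instead of spawning shell commands per file.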
Submariner enables direct networking between pods and services on different Kubernetes clusters that are either on-premises or in the cloud.
For more information about Submariner, see the Submariner open source community website at: https://submariner.io/.
This advisory contains bug fixes and enhancements to the Submariner container images. Description:
Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.
Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat Ceph Storage Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index
All users of Red Hat Ceph Storage are advised to pull these new images from the Red Hat Ecosystem catalog, which provides numerous enhancements and bug fixes. Bugs fixed (https://bugzilla.redhat.com/):
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2115198 - build ceph containers for RHCS 5.2 release
- Summary:
OpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:
OpenShift API for Data Protection (OADP) enables you to back up and restore application resources, persistent volume data, and internal container images to external backup storage. OADP enables both file system-based and snapshot-based backups for persistent volumes. Bugs fixed (https://bugzilla.redhat.com/):
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add
2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read
- JIRA issues fixed (https://issues.jboss.org/):
OADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig
OADP-154 - Ensure support for backing up resources based on different label selectors
OADP-194 - Remove the registry dependency from OADP
OADP-199 - Enable support for restore of existing resources
OADP-224 - Restore silently ignore resources if they exist - restore log not updated
OADP-225 - Restore doesn't update velero.io/backup-name when a resource is updated
OADP-234 - Implementation of incremental restore
OADP-324 - Add label to Expired backups failing garbage collection
OADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases
OADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it's unable to find the zone
OADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete
OADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot
OADP-528 - The volumesnapshotcontent is not removed for the synced backup
OADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10
OADP-538 - typo on noDefaultBackupLocation error on DPA CR
OADP-552 - Validate OADP with 4.11 and Pod Security Admissions
OADP-558 - Empty Failed Backup CRs can't be removed
OADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version
OADP-586 - registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly
OADP-592 - OADP must-gather add support for insecure tls
OADP-597 - BSL validation logs
OADP-598 - Data mover performance on backup blocks backup process
OADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl
OADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled
OADP-602 - Support GCP for openshift-velero-plugin registry
OADP-605 - [OCP 4.11] CSI restore fails with admission webhook \"volumesnapshotclasses.snapshot.storage.k8s.io\" denied
OADP-607 - DataMover: VSB is stuck on SnapshotBackupDone
OADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace
OADP-613 - DataMover: upstream documentation refers wrong CRs
OADP-637 - Restic backup fails with CA certificate
OADP-643 - [Data Mover] VSB and VSR names are not unique
OADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable
OADP-648 - Remove default limits for velero and restic pods
OADP-652 - Data mover VolSync pod errors with Noobaa
OADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace
OADP-660 - Data mover restic secret does not support Azure
OADP-698 - DataMover: volume-snapshot-mover pod points to upstream image
OADP-715 - Restic restore fails: restic-wait container continuously fails with "Not found: /restores/
- Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- vm2: Sandbox Escape in vm2 (CVE-2022-36067)
Bug fixes:
- Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters (BZ# 2074547)
- OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint (BZ# 2082254)
- subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec (BZ# 2083659)
- Yaml editor for creating vSphere cluster moves to next line after typing (BZ# 2086883)
- Submariner addon status doesn't track all deployment failures (BZ# 2090311)
- Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret (BZ# 2091170)
- After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors (BZ# 2095481)
- Enforce failed and report the violation after modified memory value in limitrange policy (BZ# 2100036)
- Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)" (BZ# 2101577)
- Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies (BZ# 2102273)
- managed cluster is in "unknown" state for 120 mins after OADP restore
- RHACM 2.5.2 images (BZ# 2104553)
- Subscription UI does not allow binding to label with empty value (BZ# 2104961)
- Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ# 2106069)
- Region information is not available for Azure cloud in managedcluster CR (BZ# 2107134)
- cluster uninstall log points to incorrect container name (BZ# 2107359)
- ACM shows wrong path for Argo CD applicationset git generator (BZ# 2107885)
- Single node checkbox not visible for 4.11 images (BZ# 2109134)
- Unable to deploy hypershift cluster when enabling validate-cluster-security (BZ# 2109544)
- Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application (BZ# 2110026)
- After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating (BZ# 2117728)
- pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)
- ArgoCD and AppSet Applications do not deploy to local-cluster (BZ# 2124707)
Bugs fixed (https://bugzilla.redhat.com/):
2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters
2082254 - OCP 4.11 - Install fails because of: pods "management-ingress-63029-5cf6789dd6-" is forbidden: unable to validate against any security context constraint
2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec
2086883 - Yaml editor for creating vSphere cluster moves to next line after typing
2090311 - Submariner addon status doesn't track all deployment failures
2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret
2095481 - After switching to ACM 2.5 the managed clusters log "unable to create ClusterClaim" errors
2100036 - Enforce failed and report the violation after modified memory value in limitrange policy
2101577 - Creating an application fails with "This application has no subscription match selector (spec.selector.matchExpressions)"
2102273 - Inconsistent cluster resource statuses between "All Subscription" topology and individual topologies
2103653 - managed cluster is in "unknown" state for 120 mins after OADP restore
2104553 - RHACM 2.5.2 images
2104961 - Subscription UI does not allow binding to label with empty value
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD
2107134 - Region information is not available for Azure cloud in managedcluster CR
2107359 - cluster uninstall log points to incorrect container name
2107885 - ACM shows wrong path for Argo CD applicationset git generator
2109134 - Single node checkbox not visible for 4.11 images
2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application
2117728 - After the creation by a policy of job or deployment (in case the object is missing) ACM is trying to add new containers instead of updating
2122292 - pods in CrashLoopBackoff on 3.11 managed cluster
2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster
2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2
Bug Fix(es):
- Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api (BZ#2033191)
- Restart of VM Pod causes SSH keys to be regenerated within VM (BZ#2087177)
- Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR (BZ#2089391)
- [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass (BZ#2098225)
- Fedora version in DataImportCrons is not 'latest' (BZ#2102694)
- [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted (BZ#2109407)
- CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls (BZ#2110562)
- Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based (BZ#2112643)
- Unable to start windows VMs on PSI setups (BZ#2115371)
- [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 (BZ#2128997)
- Mark Windows 11 as TechPreview (BZ#2129013)
- 4.11.1 rpms (BZ#2139453)
This advisory contains the following OpenShift Virtualization 4.11.1 images.
RHEL-8-CNV-4.11
virt-cdi-operator-container-v4.11.1-5 virt-cdi-uploadserver-container-v4.11.1-5 virt-cdi-apiserver-container-v4.11.1-5 virt-cdi-importer-container-v4.11.1-5 virt-cdi-controller-container-v4.11.1-5 virt-cdi-cloner-container-v4.11.1-5 virt-cdi-uploadproxy-container-v4.11.1-5 checkup-framework-container-v4.11.1-3 kubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7 kubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7 kubevirt-template-validator-container-v4.11.1-4 virt-handler-container-v4.11.1-5 hostpath-provisioner-operator-container-v4.11.1-4 virt-api-container-v4.11.1-5 vm-network-latency-checkup-container-v4.11.1-3 cluster-network-addons-operator-container-v4.11.1-5 virtio-win-container-v4.11.1-4 virt-launcher-container-v4.11.1-5 ovs-cni-marker-container-v4.11.1-5 hyperconverged-cluster-webhook-container-v4.11.1-7 virt-controller-container-v4.11.1-5 virt-artifacts-server-container-v4.11.1-5 kubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7 kubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7 libguestfs-tools-container-v4.11.1-5 hostpath-provisioner-container-v4.11.1-4 kubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7 kubevirt-tekton-tasks-copy-template-container-v4.11.1-7 cnv-containernetworking-plugins-container-v4.11.1-5 bridge-marker-container-v4.11.1-5 virt-operator-container-v4.11.1-5 hostpath-csi-driver-container-v4.11.1-4 kubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7 kubemacpool-container-v4.11.1-5 hyperconverged-cluster-operator-container-v4.11.1-7 kubevirt-ssp-operator-container-v4.11.1-4 ovs-cni-plugin-container-v4.11.1-5 kubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7 kubevirt-tekton-tasks-operator-container-v4.11.1-2 cnv-must-gather-container-v4.11.1-8 kubevirt-console-plugin-container-v4.11.1-9 hco-bundle-registry-container-v4.11.1-49
- Bugs fixed (https://bugzilla.redhat.com/):
2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api 2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression 2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML 2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode 2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar 2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM 2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR 2091856 - ?Edit BootSource? action should have more explicit information when disabled 2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add 2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass 2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS 2102694 - Fedora version in DataImportCrons is not 'latest' 2109407 - [4.11] Cloned VM's snapshot restore fails if the source VM disk is deleted 2110562 - CNV introduces a compliance check fail in "ocp4-moderate" profile - routes-protected-by-tls 2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based 2115371 - Unable to start windows VMs on PSI setups 2119613 - GiB changes to B in Template's Edit boot source reference modal 2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass 2128872 - [4.11]Can't restore cloned VM 2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24 2129013 - Mark Windows 11 as TechPreview 2129235 - [RFE] Add "Copy SSH command" to VM action list 2134668 - Cannot edit ssh even vm is stopped 2139453 - 4.11.1 rpms
- Bugs fixed (https://bugzilla.redhat.com/):
2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects 2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service 2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY 2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers 2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters 2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps 2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS 2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays 2140597 - CVE-2022-37603 loader-utils:Regular expression denial of service
- JIRA issues fixed (https://issues.jboss.org/):
LOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster LOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch LOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn't support multiple CAs LOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. LOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. LOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value LOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed LOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue LOG-3310 - [release-5.5] Can't choose correct CA ConfigMap Key when creating lokistack in Console LOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: Red Hat JBoss Web Server 5.7.1 release and security update
Advisory ID:       RHSA-2022:8917-01
Product:           Red Hat JBoss Web Server
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:8917
Issue date:        2022-12-12
CVE Names:         CVE-2022-1292 CVE-2022-2068
====================================================================
1. Summary:
An update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat Enterprise Linux versions 7, 8, and 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64 Red Hat JBoss Web Server 5.7 for RHEL 8 - x86_64 Red Hat JBoss Web Server 5.7 for RHEL 9 - x86_64
- Description:
Red Hat JBoss Web Server is a fully integrated and certified set of components for hosting Java web applications. It is comprised of the Apache Tomcat Servlet container, JBoss HTTP Connector (mod_cluster), the PicketLink Vault extension for Apache Tomcat, and the Tomcat Native library.
This release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for Red Hat JBoss Web Server 5.7.0. This release includes bug fixes, enhancements and component upgrades, which are documented in the Release Notes, linked to in the References.
Security Fix(es):
- openssl: c_rehash script allows command injection (CVE-2022-1292)
- openssl: the c_rehash script allows command injection (CVE-2022-2068)
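Both CVEs belong to the same bug class: certificate file names reaching a shell-interpreted command line without sanitisation, so metacharacters in a crafted name execute as commands. The following minimal Python sketch (a hypothetical illustration of the pattern on a POSIX shell, not the actual c_rehash code) contrasts the vulnerable string-interpolation style with the fix of passing an argv list, which is what the patched script and the `openssl rehash` replacement effectively do:

```python
import subprocess

# Hypothetical crafted certificate file name; the ';' is a shell separator.
evil_name = "cert.pem; echo INJECTED"

# Vulnerable pattern: the untrusted name is spliced into a string that a
# shell interprets, so the second command after ';' actually runs.
unsafe = subprocess.run(
    f"echo hashing {evil_name}", shell=True,
    capture_output=True, text=True,
).stdout
assert "INJECTED" in unsafe          # the injected command executed
assert evil_name not in unsafe       # the name was split apart by the shell

# Fixed pattern: the name is one opaque argv element, never shell-parsed,
# so the metacharacters survive only as inert data.
safe = subprocess.run(
    ["echo", "hashing", evil_name],
    capture_output=True, text=True,
).stdout
assert evil_name in safe             # name passed through intact as data
```

This is why the advisory recommends retiring the c_rehash script in favour of the built-in `openssl rehash` subcommand, which never routes file names through a shell.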
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
For details on how to apply this update, refer to:
https://access.redhat.com/articles/11258
- Package List:
Red Hat JBoss Web Server 5.7 for RHEL 7 Server:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm
x86_64: jws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 8:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm
x86_64: jws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm
Red Hat JBoss Web Server 5.7 for RHEL 9:
Source: jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm
x86_64: jws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm jws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBY5dYDtzjgjWX9erEAQihfg/+JKRn1ponld/PXWb0JyTUZp2RsgqRlaoi dFWK8JVr3iIzA8pVUqiy+9fYqvRLvRNv8iyPezTFvlfi70FDLXd58QjxQd2zIcI2 tvwFp3mFYfqT3iEz3PdvhiDpPx9XVeSuXgl8CglshJc4ARkLtdIJzkB6xoWl3fe0 myZzwJChpWzOYvZWZVzPRNzsuAi75pc/y8GwVh+fIlw3iySiskkspGVksXBmoBup XIM0O9ICMJ4jUbNTEZ0AwM6yZX1603sdvW60UarBVjf48vIM8x2ef6h84xEMB/3J eLaUlm5Gm68CQx3Sf+ImCCmYcJ2LmX3KnBMGUhBiQGh2SlEJPKijlrHAhLX7M1YG /yvgd8plwRCAsYTlAJyhcXpBovNtP9io+S4kNy/j/HswvuUcJ+mrJNfZq6AwRnoF cNf2h1+Nl8VlT5YXkbZ0vRW1VbY7L4G1BCiqG2VGdjuOuynXh2URHsdKgs9zHY+5 OMaV16fDbH23t04So+b4hxTsfelUUWEqyKk3qvZESNoFmWPCbaBpzDlawSGEFp5g Ly0SN2cW39creXZ3uYioyMnHKeviSDGX8ik40c7mMYYaGnbgP1mPR8FWu9C3EoWi 0LV3EDSHyFKFxUahjGzKKmjDQtYXPAt9Ci1Vp0OQFhKtAecfmlRZJEZRL4JCgKUd vabHaw7IH20=YAuF -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202206-1428", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sannav", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "santricity smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", 
"version": null }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "fas 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "aff a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h615c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "snapmanager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1p" }, { "model": "bootstrap os", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h610c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "3.0.4" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.0.2zf" }, { "model": "solidfire", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "aff 8300", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "aff 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "smi-s provider", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ontap antivirus connector", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": 
"1.1.1" }, { "model": "fas 8700", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "h610s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "fas a400", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "element software", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "3.0.4", "versionStartIncluding": "3.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1p", "versionStartIncluding": "1.1.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.2zf", "versionStartIncluding": "1.0.2", "vulnerable": 
true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:santricity_smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:element_software:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:smi-s_provider:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:snapmanager:-:*:*:*:*:hyper-v:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:ontap_antivirus_connector:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:bootstrap_os:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": 
[], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_8300_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_8300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_8700_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_8700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": 
"AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:fas_a400_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:fas_a400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_8300_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_8300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_8700_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_8700:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:broadcom:sannav:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" 
}, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" } ], "trust": 0.9 }, "cve": "CVE-2022-2068", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "id": "CVE-2022-2068", "impactScore": 10.0, "integrityImpact": "COMPLETE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "HIGH", "trust": 0.1, "userInteractionRequired": 
null, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-2068", "trust": 1.0, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202206-2112", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULMON", "id": "CVE-2022-2068", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In addition to the c_rehash shell command injection identified in CVE-2022-1292, further circumstances where the c_rehash script does not properly sanitise shell metacharacters to prevent command injection were found by code review. When the CVE-2022-1292 was fixed it was not discovered that there are other places in the script where the file names of certificates being hashed were possibly passed to a command executed through the shell. This script is distributed by some operating systems in a manner where it is automatically executed. On such operating systems, an attacker could execute arbitrary commands with the privileges of the script. Use of the c_rehash script is considered obsolete and should be replaced by the OpenSSL rehash command line tool. Fixed in OpenSSL 3.0.4 (Affected 3.0.0,3.0.1,3.0.2,3.0.3). 
Fixed in OpenSSL 1.1.1p (Affected 1.1.1-1.1.1o). Fixed in OpenSSL 1.0.2zf (Affected 1.0.2-1.0.2ze). (CVE-2022-2068). Description:\n\nSubmariner enables direct networking between pods and services on different\nKubernetes clusters that are either on-premises or in the cloud. \n\nFor more information about Submariner, see the Submariner open source\ncommunity website at: https://submariner.io/. \n\nThis advisory contains bug fixes and enhancements to the Submariner\ncontainer images. Description:\n\nRed Hat Ceph Storage is a scalable, open, software-defined storage platform\nthat combines the most stable version of the Ceph storage system with a\nCeph management platform, deployment utilities, and support services. \n\nSpace precludes documenting all of these changes in this advisory. Users\nare directed to the Red Hat Ceph Storage Release Notes for information on\nthe most significant of these changes:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index\n\nAll users of Red Hat Ceph Storage are advised to pull these new images from\nthe Red Hat Ecosystem catalog, which provides numerous enhancements and bug\nfixes. Bugs fixed (https://bugzilla.redhat.com/):\n\n2031228 - CVE-2021-43813 grafana: directory traversal vulnerability\n2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources\n2115198 - build ceph containers for RHCS 5.2 release\n\n5. Summary:\n\nOpenShift API for Data Protection (OADP) 1.1.0 is now available. Description:\n\nOpenShift API for Data Protection (OADP) enables you to back up and restore\napplication resources, persistent volume data, and internal container\nimages to external backup storage. OADP enables both file system-based and\nsnapshot-based backups for persistent volumes. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2107342 - CVE-2022-30631 golang: compress/gzip: stack exhaustion in Reader.Read\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOADP-145 - Restic Restore stuck on InProgress status when app is deployed with DeploymentConfig\nOADP-154 - Ensure support for backing up resources based on different label selectors\nOADP-194 - Remove the registry dependency from OADP\nOADP-199 - Enable support for restore of existing resources\nOADP-224 - Restore silently ignore resources if they exist - restore log not updated\nOADP-225 - Restore doesn\u0027t update velero.io/backup-name when a resource is updated\nOADP-234 - Implementation of incremental restore\nOADP-324 - Add label to Expired backups failing garbage collection\nOADP-382 - 1.1: Update downstream OLM channels to support different x and y-stream releases\nOADP-422 - [GCP] An attempt of snapshoting volumes on CSI storageclass using Velero-native snapshots fails because it\u0027s unable to find the zone\nOADP-423 - CSI Backup is not blocked and does not wait for snapshot to complete\nOADP-478 - volumesnapshotcontent cannot be deleted; SnapshotDeleteError Failed to delete snapshot\nOADP-528 - The volumesnapshotcontent is not removed for the synced backup\nOADP-533 - OADP Backup via Ceph CSI snapshot hangs indefinitely on OpenShift v4.10\nOADP-538 - typo on noDefaultBackupLocation error on DPA CR\nOADP-552 - Validate OADP with 4.11 and Pod Security Admissions\nOADP-558 - Empty Failed Backup CRs can\u0027t be removed\nOADP-585 - OADP 1.0.3: CSI functionality is broken on OCP 4.11 due to missing v1beta1 API version\nOADP-586 - 
registry deployment still exists on 1.1 build, and the registry pod gets recreated endlessly\nOADP-592 - OADP must-gather add support for insecure tls\nOADP-597 - BSL validation logs\nOADP-598 - Data mover performance on backup blocks backup process\nOADP-599 - [Data Mover] Datamover Restic secret cannot be configured per bsl\nOADP-600 - Operator should validate volsync installation and raise warning if data mover is enabled\nOADP-602 - Support GCP for openshift-velero-plugin registry\nOADP-605 - [OCP 4.11] CSI restore fails with admission webhook \\\"volumesnapshotclasses.snapshot.storage.k8s.io\\\" denied\nOADP-607 - DataMover: VSB is stuck on SnapshotBackupDone\nOADP-610 - Data mover fails if a stale volumesnapshot exists in application namespace\nOADP-613 - DataMover: upstream documentation refers wrong CRs\nOADP-637 - Restic backup fails with CA certificate\nOADP-643 - [Data Mover] VSB and VSR names are not unique\nOADP-644 - VolumeSnapshotBackup and VolumeSnapshotRestore timeouts should be configurable\nOADP-648 - Remove default limits for velero and restic pods\nOADP-652 - Data mover VolSync pod errors with Noobaa\nOADP-655 - DataMover: volsync-dst-vsr pod completes although not all items where restored in the namespace\nOADP-660 - Data mover restic secret does not support Azure\nOADP-698 - DataMover: volume-snapshot-mover pod points to upstream image\nOADP-715 - Restic restore fails: restic-wait container continuously fails with \"Not found: /restores/\u003cpod-volume\u003e/.velero/\u003crestore-UID\u003e\"\nOADP-716 - Incremental restore: second restore of a namespace partially fails\nOADP-736 - Data mover VSB always fails with volsync 0.5\n\n6. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes:\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n* vm2: Sandbox Escape in vm2 (CVE-2022-36067)\n\nBug fixes:\n\n* Submariner Globalnet e2e tests failed on MTU between On-Prem to Public\nclusters (BZ# 2074547)\n\n* OCP 4.11 - Install fails because of: pods\n\"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate\nagainst any security context constraint (BZ# 2082254)\n\n* subctl gather fails to gather libreswan data if CableDriver field is\nmissing/empty in Submariner Spec (BZ# 2083659)\n\n* Yaml editor for creating vSphere cluster moves to next line after typing\n(BZ# 2086883)\n\n* Submariner addon status doesn\u0027t track all deployment failures (BZ#\n2090311)\n\n* Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn\nwithout including s3 secret (BZ# 2091170)\n\n* After switching to ACM 2.5 the managed clusters log \"unable to create\nClusterClaim\" errors (BZ# 2095481)\n\n* Enforce failed and report the violation after modified memory value in\nlimitrange policy (BZ# 2100036)\n\n* Creating an application fails with \"This application has no subscription\nmatch selector (spec.selector.matchExpressions)\" (BZ# 2101577)\n\n* Inconsistent cluster resource statuses between \"All Subscription\"\ntopology and individual topologies (BZ# 2102273)\n\n* managed cluster is in \"unknown\" state for 120 mins after OADP restore\n\n* RHACM 2.5.2 images (BZ# 2104553)\n\n* Subscription UI does not allow binding to label with empty value (BZ#\n2104961)\n\n* Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD (BZ#\n2106069)\n\n* Region information is not available for Azure cloud in managedcluster CR\n(BZ# 2107134)\n\n* 
cluster uninstall log points to incorrect container name (BZ# 2107359)\n\n* ACM shows wrong path for Argo CD applicationset git generator (BZ#\n2107885)\n\n* Single node checkbox not visible for 4.11 images (BZ# 2109134)\n\n* Unable to deploy hypershift cluster when enabling\nvalidate-cluster-security (BZ# 2109544)\n\n* Deletion of Application (including app related resources) from the\nconsole fails to delete PlacementRule for the application (BZ# 20110026)\n\n* After the creation by a policy of job or deployment (in case the object\nis missing)ACM is trying to add new containers instead of updating (BZ#\n2117728)\n\n* pods in CrashLoopBackoff on 3.11 managed cluster (BZ# 2122292)\n\n* ArgoCD and AppSet Applications do not deploy to local-cluster (BZ#\n2124707)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2074547 - Submariner Globalnet e2e tests failed on MTU between On-Prem to Public clusters\n2082254 - OCP 4.11 - Install fails because of: pods \"management-ingress-63029-5cf6789dd6-\" is forbidden: unable to validate against any security context constraint\n2083659 - subctl gather fails to gather libreswan data if CableDriver field is missing/empty in Submariner Spec\n2086883 - Yaml editor for creating vSphere cluster moves to next line after typing\n2090311 - Submariner addon status doesn\u0027t track all deployment failures\n2091170 - Unable to deploy Hypershift operator on MCE hub using ManagedClusterAddOn without including s3 secret\n2095481 - After switching to ACM 2.5 the managed clusters log \"unable to create ClusterClaim\" errors\n2100036 - Enforce failed and report the violation after modified memory value in limitrange policy\n2101577 - Creating an application fails with \"This application has no subscription match selector (spec.selector.matchExpressions)\"\n2102273 - Inconsistent cluster resource statuses between \"All Subscription\" topology and individual topologies\n2103653 - managed cluster is in \"unknown\" state for 120 mins after OADP 
restore\n2104553 - RHACM 2.5.2 images\n2104961 - Subscription UI does not allow binding to label with empty value\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n2106069 - Upgrade to 2.5.1 from 2.5.0 fails due to missing Subscription CRD\n2107134 - Region information is not available for Azure cloud in managedcluster CR\n2107359 - cluster uninstall log points to incorrect container name\n2107885 - ACM shows wrong path for Argo CD applicationset git generator\n2109134 - Single node checkbox not visible for 4.11 images\n2110026 - Deletion of Application (including app related resources) from the console fails to delete PlacementRule for the application\n2117728 - After the creation by a policy of job or deployment (in case the object is missing)ACM is trying to add new containers instead of updating\n2122292 - pods in CrashLoopBackoff on 3.11 managed cluster\n2124707 - ArgoCD and AppSet Applications do not deploy to local-cluster\n2124794 - CVE-2022-36067 vm2: Sandbox Escape in vm2\n\n5. 
\n\nBug Fix(es):\n\n* Cloning a Block DV to VM with Filesystem with not big enough size comes\nto endless loop - using pvc api (BZ#2033191)\n\n* Restart of VM Pod causes SSH keys to be regenerated within VM\n(BZ#2087177)\n\n* Import gzipped raw file causes image to be downloaded and uncompressed to\nTMPDIR (BZ#2089391)\n\n* [4.11] VM Snapshot Restore hangs indefinitely when backed by a\nsnapshotclass (BZ#2098225)\n\n* Fedora version in DataImportCrons is not \u0027latest\u0027 (BZ#2102694)\n\n* [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is\ndeleted (BZ#2109407)\n\n* CNV introduces a compliance check fail in \"ocp4-moderate\" profile -\nroutes-protected-by-tls (BZ#2110562)\n\n* Nightly build: v4.11.0-578: index format was changed in 4.11 to\nfile-based instead of sqlite-based (BZ#2112643)\n\n* Unable to start windows VMs on PSI setups (BZ#2115371)\n\n* [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity\nrestricted:v1.24 (BZ#2128997)\n\n* Mark Windows 11 as TechPreview (BZ#2129013)\n\n* 4.11.1 rpms (BZ#2139453)\n\nThis advisory contains the following OpenShift Virtualization 4.11.1\nimages. 
\n\nRHEL-8-CNV-4.11\n\nvirt-cdi-operator-container-v4.11.1-5\nvirt-cdi-uploadserver-container-v4.11.1-5\nvirt-cdi-apiserver-container-v4.11.1-5\nvirt-cdi-importer-container-v4.11.1-5\nvirt-cdi-controller-container-v4.11.1-5\nvirt-cdi-cloner-container-v4.11.1-5\nvirt-cdi-uploadproxy-container-v4.11.1-5\ncheckup-framework-container-v4.11.1-3\nkubevirt-tekton-tasks-wait-for-vmi-status-container-v4.11.1-7\nkubevirt-tekton-tasks-create-datavolume-container-v4.11.1-7\nkubevirt-template-validator-container-v4.11.1-4\nvirt-handler-container-v4.11.1-5\nhostpath-provisioner-operator-container-v4.11.1-4\nvirt-api-container-v4.11.1-5\nvm-network-latency-checkup-container-v4.11.1-3\ncluster-network-addons-operator-container-v4.11.1-5\nvirtio-win-container-v4.11.1-4\nvirt-launcher-container-v4.11.1-5\novs-cni-marker-container-v4.11.1-5\nhyperconverged-cluster-webhook-container-v4.11.1-7\nvirt-controller-container-v4.11.1-5\nvirt-artifacts-server-container-v4.11.1-5\nkubevirt-tekton-tasks-modify-vm-template-container-v4.11.1-7\nkubevirt-tekton-tasks-disk-virt-customize-container-v4.11.1-7\nlibguestfs-tools-container-v4.11.1-5\nhostpath-provisioner-container-v4.11.1-4\nkubevirt-tekton-tasks-disk-virt-sysprep-container-v4.11.1-7\nkubevirt-tekton-tasks-copy-template-container-v4.11.1-7\ncnv-containernetworking-plugins-container-v4.11.1-5\nbridge-marker-container-v4.11.1-5\nvirt-operator-container-v4.11.1-5\nhostpath-csi-driver-container-v4.11.1-4\nkubevirt-tekton-tasks-create-vm-from-template-container-v4.11.1-7\nkubemacpool-container-v4.11.1-5\nhyperconverged-cluster-operator-container-v4.11.1-7\nkubevirt-ssp-operator-container-v4.11.1-4\novs-cni-plugin-container-v4.11.1-5\nkubevirt-tekton-tasks-cleanup-vm-container-v4.11.1-7\nkubevirt-tekton-tasks-operator-container-v4.11.1-2\ncnv-must-gather-container-v4.11.1-8\nkubevirt-console-plugin-container-v4.11.1-9\nhco-bundle-registry-container-v4.11.1-49\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2033191 - Cloning a Block DV to VM with Filesystem with not big enough size comes to endless loop - using pvc api\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2070772 - When specifying pciAddress for several SR-IOV NIC they are not correctly propagated to libvirt XML\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2087177 - Restart of VM Pod causes SSH keys to be regenerated within VM\n2089391 - Import gzipped raw file causes image to be downloaded and uncompressed to TMPDIR\n2091856 - ?Edit BootSource? action should have more explicit information when disabled\n2092793 - CVE-2022-30629 golang: crypto/tls: session tickets lack random ticket_age_add\n2098225 - [4.11] VM Snapshot Restore hangs indefinitely when backed by a snapshotclass\n2100495 - CVE-2021-38561 golang: out-of-bounds read in golang.org/x/text/language leads to DoS\n2102694 - Fedora version in DataImportCrons is not \u0027latest\u0027\n2109407 - [4.11] Cloned VM\u0027s snapshot restore fails if the source VM disk is deleted\n2110562 - CNV introduces a compliance check fail in \"ocp4-moderate\" profile - routes-protected-by-tls\n2112643 - Nightly build: v4.11.0-578: index format was changed in 4.11 to file-based instead of sqlite-based\n2115371 - Unable to start windows VMs on PSI setups\n2119613 - GiB changes to B in Template\u0027s Edit boot source reference modal\n2128554 - The storageclass of VM disk is different from quick created and customize created after changed the default storageclass\n2128872 - [4.11]Can\u0027t restore cloned VM\n2128997 - [4.11.1]virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24\n2129013 - Mark Windows 11 as TechPreview\n2129235 - [RFE] Add \"Copy SSH command\" to VM action list\n2134668 - Cannot edit ssh even vm is stopped\n2139453 - 4.11.1 
rpms\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064698 - CVE-2020-36518 jackson-databind: denial of service via a large depth of nested objects\n2113814 - CVE-2022-32189 golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service\n2124669 - CVE-2022-27664 golang: net/http: handle server errors after sending GOAWAY\n2132867 - CVE-2022-2879 golang: archive/tar: unbounded memory consumption when reading headers\n2132868 - CVE-2022-2880 golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters\n2132872 - CVE-2022-41715 golang: regexp/syntax: limit memory used by parsing regexps\n2135244 - CVE-2022-42003 jackson-databind: deep wrapper array nesting wrt UNWRAP_SINGLE_VALUE_ARRAYS\n2135247 - CVE-2022-42004 jackson-databind: use of deeply nested arrays\n2140597 - CVE-2022-37603 loader-utils:Regular expression denial of service\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-2860 - Error on LokiStack Components when forwarding logs to Loki on proxy cluster\nLOG-3131 - vector: kube API server certificate validation failure due to hostname mismatch\nLOG-3222 - [release-5.5] fluentd plugin for kafka ca-bundle secret doesn\u0027t support multiple CAs\nLOG-3226 - FluentdQueueLengthIncreasing rule failing to be evaluated. \nLOG-3284 - [release-5.5][Vector] logs parsed into structured when json is set without structured types. 
\nLOG-3287 - [release-5.5] Increase value of cluster-logging PriorityClass to move closer to system-cluster-critical value\nLOG-3301 - [release-5.5][ClusterLogging] elasticsearchStatus in ClusterLogging instance CR is not updated when Elasticsearch status is changed\nLOG-3305 - [release-5.5] Kibana Authentication Exception cookie issue\nLOG-3310 - [release-5.5] Can\u0027t choose correct CA ConfigMap Key when creating lokistack in Console\nLOG-3332 - [release-5.5] Reconcile error on controller when creating LokiStack with tls config\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update\nAdvisory ID: RHSA-2022:8917-01\nProduct: Red Hat JBoss Web Server\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8917\nIssue date: 2022-12-12\nCVE Names: CVE-2022-1292 CVE-2022-2068\n====================================================================\n1. Summary:\n\nAn update is now available for Red Hat JBoss Web Server 5.7.1 on Red Hat\nEnterprise Linux versions 7, 8, and 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 8 - x86_64\nRed Hat JBoss Web Server 5.7 for RHEL 9 - x86_64\n\n3. Description:\n\nRed Hat JBoss Web Server is a fully integrated and certified set of\ncomponents for hosting Java web applications. It is comprised of the Apache\nTomcat Servlet container, JBoss HTTP Connector (mod_cluster), the\nPicketLink Vault extension for Apache Tomcat, and the Tomcat Native\nlibrary. 
\n\nThis release of Red Hat JBoss Web Server 5.7.1 serves as a replacement for\nRed Hat JBoss Web Server 5.7.0. This release includes bug fixes,\nenhancements and component upgrades, which are documented in the Release\nNotes, linked to in the References. \n\nSecurity Fix(es):\n\n* openssl: c_rehash script allows command injection (CVE-2022-1292)\n\n* openssl: the c_rehash script allows command injection (CVE-2022-2068)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. \n\nFor details on how to apply this update, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Package List:\n\nRed Hat JBoss Web Server 5.7 for RHEL 7 Server:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el7jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el7jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 8:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el8jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el8jws.x86_64.rpm\n\nRed Hat JBoss Web Server 5.7 for RHEL 9:\n\nSource:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.src.rpm\n\nx86_64:\njws5-tomcat-native-1.2.31-11.redhat_11.el9jws.x86_64.rpm\njws5-tomcat-native-debuginfo-1.2.31-11.redhat_11.el9jws.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. 
Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY5dYDtzjgjWX9erEAQihfg/+JKRn1ponld/PXWb0JyTUZp2RsgqRlaoi\ndFWK8JVr3iIzA8pVUqiy+9fYqvRLvRNv8iyPezTFvlfi70FDLXd58QjxQd2zIcI2\ntvwFp3mFYfqT3iEz3PdvhiDpPx9XVeSuXgl8CglshJc4ARkLtdIJzkB6xoWl3fe0\nmyZzwJChpWzOYvZWZVzPRNzsuAi75pc/y8GwVh+fIlw3iySiskkspGVksXBmoBup\nXIM0O9ICMJ4jUbNTEZ0AwM6yZX1603sdvW60UarBVjf48vIM8x2ef6h84xEMB/3J\neLaUlm5Gm68CQx3Sf+ImCCmYcJ2LmX3KnBMGUhBiQGh2SlEJPKijlrHAhLX7M1YG\n/yvgd8plwRCAsYTlAJyhcXpBovNtP9io+S4kNy/j/HswvuUcJ+mrJNfZq6AwRnoF\ncNf2h1+Nl8VlT5YXkbZ0vRW1VbY7L4G1BCiqG2VGdjuOuynXh2URHsdKgs9zHY+5\nOMaV16fDbH23t04So+b4hxTsfelUUWEqyKk3qvZESNoFmWPCbaBpzDlawSGEFp5g\nLy0SN2cW39creXZ3uYioyMnHKeviSDGX8ik40c7mMYYaGnbgP1mPR8FWu9C3EoWi\n0LV3EDSHyFKFxUahjGzKKmjDQtYXPAt9Ci1Vp0OQFhKtAecfmlRZJEZRL4JCgKUd\nvabHaw7IH20=YAuF\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2022-2068" }, { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" } ], "trust": 1.8 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-2068", "trust": 2.6 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.7 }, { "db": "ICS CERT", "id": 
"ICSA-22-319-01", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168022", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168351", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168378", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170197", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "167713", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168204", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "167948", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168284", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168538", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168112", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168222", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168182", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "167564", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168187", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "168387", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "169443", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.1430", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3269", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3109", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5961", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3355", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6290", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4296", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4122", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4568", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4099", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4747", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3145", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4167", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4233", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4669", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6434", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4323", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3034", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3977", 
"trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3814", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4525", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4601", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5247", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070615", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070209", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022062906", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070434", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071151", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022070712", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202206-2112", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-2068", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168265", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168228", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168289", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170083", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "170162", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "id": "VAR-202206-1428", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.416330645 }, "last_update_date": "2024-07-23T19:47:22.503000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "OpenSSL Fixes for operating system command injection vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=197983" }, { "title": "Debian Security Advisories: DSA-5169-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=6b57464ee127384d3d853e9cc99cf350" }, { "title": "Amazon Linux AMI: ALAS-2022-1626", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2022-1626" }, { "title": "Debian CVElist Bug Report Logs: openssl: CVE-2022-2097", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=740b837c53d462fc86f3cb0849b86ca0" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-2068" }, { "title": "Amazon Linux 2: ALAS2-2022-1832", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1832" }, { "title": "Amazon Linux 2: ALAS2-2022-1831", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2022-1831" }, { "title": "Amazon Linux 2: ALASOPENSSL-SNAPSAFE-2023-001", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alasopenssl-snapsafe-2023-001" }, { "title": "Red Hat: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-2068" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228917 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Web Server 5.7.1 release and security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228913 - security advisory" }, { "title": "Red Hat: Moderate: openssl security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225818 - security advisory" }, { "title": "Red Hat: Important: Red Hat Satellite Client security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235982 - security advisory" }, { "title": "Red Hat: Moderate: openssl security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226224 - security advisory" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2.z director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226517 - security advisory" }, { "title": "Red Hat: Important: Self Node Remediation Operator 0.4.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226184 - security advisory" }, { "title": "Red Hat: Important: Satellite 6.11.5.6 async security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235980 - security advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-123", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-123" }, { "title": "Red Hat: Important: Satellite 6.12.5.2 Async Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20235979 - security advisory" }, { "title": "Red Hat: Critical: Multicluster Engine for Kubernetes 2.0.2 security and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226422 - 
security advisory" }, { "title": "Brocade Security Advisories: Access Denied", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=brocade_security_advisories\u0026qid=8efbc4133194fcddd0bca99df112b683" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.11.1 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226103 - security advisory" }, { "title": "Amazon Linux 2022: ALAS2022-2022-195", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-195" }, { "title": "Red Hat: Important: Node Maintenance Operator 4.11.1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226188 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging Security and Bug Fix update (5.3.11)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226182 - security advisory" }, { "title": "Red Hat: Important: Logging Subsystem 5.5.0 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226051 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.2.2 Containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226283 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.4.5 Security and Bug Fix Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226183 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.5.2 security fixes and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226507 - security advisory" }, { "title": "Red 
Hat: Moderate: RHOSDT 2.6.0 operator/operand containers Security Update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227055 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift sandboxed containers 1.3.1 security fix and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227058 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228840 - security advisory" }, { "title": "Red Hat: Moderate: New container image for Red Hat Ceph Storage 5.2 Security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226024 - security advisory" }, { "title": "Red Hat: Moderate: RHACS 3.72 enhancement and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226714 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.1.0 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226290 - security advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226348 - security advisory" }, { "title": "Red Hat: Moderate: Multicluster Engine for Kubernetes 2.1 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226345 - security advisory" }, { "title": "Red Hat: Important: Red Hat JBoss Core Services Apache HTTP Server 2.4.51 SP1 security update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228841 - security advisory" }, { "title": "Red Hat: Moderate: RHSA: Submariner 0.13 - security and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226346 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift API for Data Protection (OADP) 1.0.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226430 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.6.0 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226370 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.12 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226271 - security advisory" }, { "title": "Red Hat: Critical: Red Hat Advanced Cluster Management 2.4.6 security update and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226696 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Virtualization 4.11.1 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228750 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.11.0 Images security and bug fix update", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226526 - security advisory" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "Red Hat: Important: OpenShift Virtualization 4.12.0 Images security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230408 - security advisory" }, { "title": "Red Hat: Moderate: Openshift Logging 5.3.14 bug fix release and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228889 - security advisory" }, { "title": "Red Hat: Moderate: Logging Subsystem 5.5.5 - Red Hat OpenShift security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228781 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "Smart Check Scan-Report", "trust": 0.1, "url": "https://github.com/mawinkler/c1-cs-scan-result " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/vulnerability_checker " }, { "title": "https://github.com/jntass/TASSL-1.1.1", "trust": 0.1, "url": "https://github.com/jntass/tassl-1.1.1 " }, { "title": "Repository with scripts to verify system against CVE", "trust": 0.1, "url": "https://github.com/backloop-biz/cve_checks " }, { "title": "https://github.com/tianocore-docs/ThirdPartySecurityAdvisories", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " }, { "title": "OpenSSL-CVE-lib", 
"trust": 0.1, "url": "https://github.com/chnzzh/openssl-cve-lib " }, { "title": "The Register", "trust": 0.1, "url": "https://www.theregister.co.uk/2022/06/27/openssl_304_memory_corruption_bug/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-78", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2022-2068" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://www.debian.org/security/2022/dsa-5169" }, { "trust": 1.7, "url": "https://www.openssl.org/news/secadv/20220621.txt" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20220707-0008/" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=2c9c35870601b4a44d86ddbf512b38df38285cfa" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=9639817dac8bbbaa64d09efad7464ccc405527c7" }, { "trust": 1.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.9, 
"url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.8, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=9639817dac8bbbaa64d09efad7464ccc405527c7" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/6wzzbkuhqfgskgnxxkicsrpl7amvw5m5/" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=2c9c35870601b4a44d86ddbf512b38df38285cfa" }, { "trust": 0.6, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=7a9c027159fe9e1bbc2cd38a8a2914bff0d5abd9" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vcmnwkerpbkoebnl7clttx3zzczlh7xa/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4747" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3977" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4669" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/170197/red-hat-security-advisory-2022-8917-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3814" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168538/red-hat-security-advisory-2022-6696-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167948/red-hat-security-advisory-2022-5818-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168222/red-hat-security-advisory-2022-6283-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022062906" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168182/red-hat-security-advisory-2022-6184-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6290" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168204/red-hat-security-advisory-2022-6224-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4099" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4296" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4233" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6434" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3145" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070209" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168378/red-hat-security-advisory-2022-6507-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5247" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5961" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3269" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167713/ubuntu-security-notice-usn-5488-2.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3109" }, { "trust": 0.6, "url": 
"https://cxsecurity.com/cveshow/cve-2022-2068/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168112/red-hat-security-advisory-2022-6051-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071151" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168187/red-hat-security-advisory-2022-6188-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168284/red-hat-security-advisory-2022-6183-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.1430" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-319-01" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168351/red-hat-security-advisory-2022-6430-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4167" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/167564/ubuntu-security-notice-usn-5488-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3034" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070615" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168022/red-hat-security-advisory-2022-6024-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4122" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4323" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3355" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070434" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4525" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169443/red-hat-security-advisory-2022-7058-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022070712" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4568" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/168387/red-hat-security-advisory-2022-6517-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4601" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2526" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25314" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25313" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.4, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-38561" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32148" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30630" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1705" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1705" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38561" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1962" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24675" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21698" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-3709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1304" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26700" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26716" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26710" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-2509" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22629" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2022-26719" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26717" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22662" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27404" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-3709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-34903" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22624" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3515" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35525" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-37434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27406" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35525" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-26709" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22628" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27405" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-30293" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/78.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://github.com/backloop-biz/vulnerability_checker" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-319-01" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2022-1626.html" }, { "trust": 0.1, "url": "https://submariner.io/getting-started/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6346" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-30635" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28131" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28131" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30633" }, { "trust": 0.1, "url": "https://submariner.io/." }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30632" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.6/html/add-ons/submariner#submariner-deploy-console" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5.2/html-single/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/1548993" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/2789521" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6024" }, { "trust": 
0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6430" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6290" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6507" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#critical" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-31129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36067" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6182" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0308" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0308" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-25309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30698" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30699" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0256" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2015-20107" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1304" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0256" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25310" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2015-20107" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-40674" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24795" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38178" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25308" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22844" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28390" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30002" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21619" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27950" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-3640" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/release_notes/ocp-4-11-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0854" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-20368" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0617" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2586" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25255" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-41715" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21624" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0168" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0865" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28893" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0854" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2879" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-2078" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0617" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21626" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-39399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-36946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0562" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42003" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26373" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1355" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1048" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0924" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2880" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36558" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0908" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0561" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1184" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21499" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21628" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-42004" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27664" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8917" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": "PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-2068" }, { "db": "PACKETSTORM", "id": "168265" }, { "db": "PACKETSTORM", "id": "168022" }, { "db": "PACKETSTORM", "id": "168351" }, { "db": "PACKETSTORM", "id": "168228" }, { "db": "PACKETSTORM", "id": "168378" }, { "db": "PACKETSTORM", "id": "168289" }, { "db": "PACKETSTORM", "id": "170083" }, { "db": 
"PACKETSTORM", "id": "170162" }, { "db": "PACKETSTORM", "id": "170197" }, { "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-06-21T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2022-09-07T16:37:33", "db": "PACKETSTORM", "id": "168265" }, { "date": "2022-08-10T15:50:41", "db": "PACKETSTORM", "id": "168022" }, { "date": "2022-09-13T15:41:58", "db": "PACKETSTORM", "id": "168351" }, { "date": "2022-09-01T16:34:06", "db": "PACKETSTORM", "id": "168228" }, { "date": "2022-09-14T15:08:07", "db": "PACKETSTORM", "id": "168378" }, { "date": "2022-09-07T17:09:04", "db": "PACKETSTORM", "id": "168289" }, { "date": "2022-12-02T15:57:08", "db": "PACKETSTORM", "id": "170083" }, { "date": "2022-12-08T16:34:22", "db": "PACKETSTORM", "id": "170162" }, { "date": "2022-12-12T23:02:33", "db": "PACKETSTORM", "id": "170197" }, { "date": "2022-06-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "date": "2022-06-21T15:15:09.060000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2022-2068" }, { "date": "2023-03-09T00:00:00", "db": "CNNVD", "id": "CNNVD-202206-2112" }, { "date": "2023-11-07T03:46:11.177000", "db": "NVD", "id": "CVE-2022-2068" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL Operating system command injection vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "operating system commend injection", "sources": [ { "db": "CNNVD", "id": "CNNVD-202206-2112" } ], "trust": 0.6 } }
var-202102-1466
Vulnerability from variot
Lodash versions prior to 4.17.21 are vulnerable to command injection via the template function. Exploitation may allow information to be obtained or tampered with, and may cause a denial-of-service (DoS) condition. Please keep an eye on CNNVD or vendor announcements for updates. Description:
The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.
Bug Fix(es):
- This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)
- With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)
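As an illustration of the new sub-option, a bond configured for balance-xor with the source-load-balancing hash policy might look like the following. This is a hedged sketch: the file path, device name, and surrounding options are hypothetical; only `xmit_hash_policy=vlan+srcmac` and the supported modes come from the advisory text above.

```ini
; /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example file)
; balance-xor (mode 2) using the new source-load-balancing hash policy
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=balance-xor xmit_hash_policy=vlan+srcmac"
BOOTPROTO=none
ONBOOT=yes
```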
- The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6. (BZ#1950348)
- With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)
- Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was set by default to false and was supposed to be used in cluster compatibility levels below 4.4, but the value applied globally rather than per cluster level. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead in removing timeouts of the file system freeze command. (BZ#1932284)
-
With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)
-
This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine
1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain
1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine
1717411 - improve engine logging when migration fail
1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs
1775145 - Incorrect message from hot-plugging memory
1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC.
1845877 - [RFE] Collect information about RHV PKI
1875363 - engine-setup failing on FIPS enabled rhel8 machine
1906074 - [RFE] Support disks copy between regular and managed block storage domains
1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration
1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning
1919195 - Unable to create snapshot without saving memory of running VM from VM Portal.
1919984 - engine-setup failse to deploy the grafana service in an external DWH server
1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal
1926018 - Failed to run VM after FIPS mode is enabled
1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package.
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate'
1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4"
1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful
1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured
1932284 - Engine handled FS freeze is not fast enough for Windows systems
1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed
1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2
1943267 - Snapshot creation is failing for VM having vGPU.
1944723 - [RFE] Support virtual machines with 16TB memory
1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted)
1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule
1949547 - rhv-log-collector-analyzer report contains 'b characters
1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6
1950466 - Host installation failed
1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs.
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
=====================================================================
                   Red Hat Security Advisory
Synopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update
Advisory ID:       RHSA-2021:2438-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:2438
Issue date:        2021-07-27
CVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106
                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114
                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541
                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852
                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636
                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291
                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639
                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133
                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368
                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737
                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292
                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622
                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194
                   CVE-2021-33909
=====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2021:2437
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
Security Fix(es):
- SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
- etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)
- etcd: DoS in wal/wal.go (CVE-2020-15112)
- etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)
- etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)
- etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)
- jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)
- containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)
- containers/storage: DoS via malicious image (CVE-2021-20291)
- prometheus: open redirect under the /new endpoint (CVE-2021-29622)
- golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)
- go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)
Space precludes listing in detail the following additional CVE fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)
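Among the additional fixes, CVE-2021-23337 (nodejs-lodash: command injection via template) arises because lodash's template function spliced the caller-supplied variable option, unsanitized, into JavaScript source handed to the Function constructor. A minimal sketch of this bug class follows; naiveTemplate is a hypothetical stand-in, not lodash's actual implementation:

```javascript
// Simplified model of the CVE-2021-23337 bug class. This is NOT lodash's
// real code; `naiveTemplate` is a hypothetical helper that shows how an
// unsanitized `variable` option becomes code injection.
function naiveTemplate(text, options = {}) {
  const variable = options.variable || "obj";
  // The option lands verbatim inside generated JavaScript source.
  const source = "with (" + variable + ") { return '" + text + "'; }";
  return new Function("obj", source);
}

// Benign use behaves like a template:
const hello = naiveTemplate("hello");
hello({}); // returns "hello"

// An attacker-controlled `variable` escapes the intended expression and
// runs arbitrary code when the compiled template is invoked:
globalThis.pwned = false;
const evil = naiveTemplate("x", {
  variable: "(globalThis.pwned = true, obj)",
});
evil({}); // side effect: globalThis.pwned is now true
```

The upstream fix in lodash 4.17.21 rejects variable values that are not plain identifiers; upgrading, or never passing untrusted input through template options, closes this class of hole.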
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64
The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x
The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le
The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f
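Each release image above is referenced by an immutable sha256 digest rather than a mutable tag. As an illustration, a small helper (hypothetical, not part of oc or any Red Hat tooling) can validate that a reference is digest-pinned before use:

```javascript
// Hypothetical helper (not part of oc): returns true only for image
// references pinned by digest, i.e. ending in "@sha256:" followed by
// exactly 64 lowercase hex characters.
function isDigestPinned(image) {
  return /@sha256:[0-9a-f]{64}$/.test(image);
}

// Digest taken from this advisory's s390x entry:
const release =
  "quay.io/openshift-release-dev/ocp-release@sha256:" +
  "a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5";
isDigestPinned(release); // true
isDigestPinned("quay.io/openshift-release-dev/ocp-release:4.8.2-s390x"); // false (mutable tag)
```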
All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)
1725981 - oc explain does not work well with full resource.group names
1747270 - [osp] Machine with name "operator-sdk init --help
1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard
1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert
1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions
1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host
1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions
1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go
1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS
1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag
1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method
1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics
1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly
1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore
1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack
1873649 - proxy.config.openshift.io should validate user inputs
1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials
1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace "openshift-user-workload-monitoring" does not exist
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/oc image extract
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the skips field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in “Insights” popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not exposed as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keeps loading when user with cluster-role cluster-monitoring-view logs in to developer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLoopBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsTyps” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitrarily
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen doesn't stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drag and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, a warning message is thrown instead of Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Deleting HCO is stuck if there is still a VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a watch instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashboard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display properly in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self-signed local registry in disconnected environment
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable" but bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project not works with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistent failures of features/project-creation.feature Cypress test in CI
1944631 - openshift authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stuck with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field becomes "All projects" and is disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component Operators
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: download it
link should save pod log in bootstrap.ign was not found
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Cannot assign multiple EgressIPs to a namespace using the automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to missing secret (in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apiserver of cluster operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if creating a rolebinding from the rolebinding tab of the role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotContent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many identical messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use oc adm top is shortcut when asking for imagestreams
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - SSH key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE] console page shows error when VM is paused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator uses default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utilization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confusing
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE] It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build is not available for OCP 4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set http proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployment reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacement UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console not works well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to understand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind
1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page
1961466 - Some dropdown placeholder text on route creation page is not translated
1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers")
1961536 - clusterdeployment without pull secret is crashing assisted service pod
1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop
1961545 - Fixing Documentation Generation
1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names"
1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig
1961561 - The encryption controllers send lots of request to an API server
1961582 - Build failure on s390x
1961644 - NodeAuthenticator tests are failing in IPv6
1961656 - driver-toolkit missing some release metadata
1961675 - Kebab menu of taskrun contains Edit options which should not be present
1961701 - Enhance gathering of events
1961717 - Update runtime dependencies to Wallaby builds for bugfixes
1961829 - Quick starts prereqs not shown when description is long
1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy
1961878 - Add Sprint 199 translations
1961897 - Remove history listener before console UI is unmounted
1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes
1962062 - Monitoring dashboards should support default values of "All"
1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a performanceprofile
1962095 - Replace gather-job image without FQDN
1962153 - VolumeSnapshot routes are ambiguous, too generic
1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime
1962219 - NTO relies on unreliable leader-for-life implementation.
1962256 - use RHEL8 as the vm-example
1962261 - Monitoring components requesting more memory than they use
1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster
1962347 - Cluster does not exist logs after successful installation
1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits
1962415 - duplicate zone information for in-tree PV after enabling migration
1962429 - Cannot create windows vm because kubemacpool.io denied the request
1962525 - [Migration] SDN migration stuck on MCO on RHV cluster
1962569 - NetworkPolicy details page should also show Egress rules
1962592 - Worker nodes restarting during OS installation
1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform
1962630 - NTO: Ship the current upstream TuneD
1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root
1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace
1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint
1962740 - Add documentation to Egress Router
1962850 - [4.8] Bootimage bump tracker
1962882 - Version pod does not set priorityClassName
1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs
1963068 - ironic container should not specify the entrypoint
1963079 - KCM/KS: ability to enforce localhost communication with the API server.
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step
1963159 - Add Sprint 200 translations
1963204 - Update to 8.4 IPA images
1963205 - Installer is using old redirector
1963208 - Translation typos/inconsistencies for Sprint 200 files
1963209 - Some strings in public.json have errors
1963211 - Fix grammar issue in kubevirt-plugin.json string
1963213 - Memsource download script running into API error
1963219 - ImageStreamTags not internationalized
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
1963267 - Warning: Invalid DOM property classname. Did you mean className? console warnings in volumes table
1963502 - create template from is not descriptive
1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too
1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault
1963848 - Use OS-shipped stalld vs. the NTO-shipped one.
1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies
1963871 - cluster-etcd-operator:[build] upgrade to go 1.16
1963896 - The VM disks table does not show easy links to PVCs
1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere
1963932 - Installation failures in bootstrap in OpenStack release jobs
1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail
1964059 - rebase openshift/sdn to kube 1.21.1
1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration
1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal"
1964243 - The oc compliance fetch-raw doesn't work for disconnected cluster
1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters"
1964319 - Network policy "deny all" interpreted as "allow all" in description page
1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured
1964472 - Make project and namespace requirements more visible rather than giving me an error after submission
1964486 - Bulk adding of CIDR IPS to whitelist is not working
1964492 - Pick 102171: Implement support for watch initialization in P&F
1964625 - NETID duplicate check is only required in NetworkPolicy Mode
1964748 - Sync upstream 1.7.2 downstream
1964756 - PVC status is always in 'Bound' status when it is actually cloning
1964847 - Sanity check test suite missing from the repo
1964888 - openshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing"
1964936 - error log for "oc adm catalog mirror" is not correct
1964979 - Add mapping from ACI to infraenv to handle creation order issues
1964997 - Helm Library charts are showing and can be installed from Catalog
1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots
1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation
1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:
1965330 - oc image extract fails due to security capabilities on files
1965334 - opm index add fails during image extraction
1965367 - Typo in etcd-metric-serving-ca resource name
1965370 - "Route" is not translated in Korean or Chinese
1965391 - When storage class is already present wizard do not jumps to "Storage and nodes"
1965422 - runc is missing Provides oci-runtime in rpm spec
1965522 - [v2v] Multiple typos on VM Import screen
1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists
1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated"
1965921 - [oVirt] High performance VMs shouldn't be created with Existing policy
1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request
1966077 - hidden descriptor is visible in the Operator instance details page
1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11
1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality
1966138 - (release-4.8) Update K8s & OpenShift API versions
1966156 - Issue with Internal Registry CA on the service pod
1966174 - No storage class is installed, OCS and CNV installations fail
1966268 - Workaround for Network Manager not supporting nmconnections priority
1966401 - Revamp Ceph Table in Install Wizard flow
1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert
1966416 - (release-4.8) Do not exceed the data size limit
1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log
1966487 - IP address in Pods list table are showing node IP other than pod IP
1966520 - Add button from ocs add capacity should not be enabled if there are no PV's
1966523 - (release-4.8) Gather MachineAutoScaler definitions
1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed
1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug
1966602 - don't require manually setting IPv6DualStack feature gate in 4.8
1966620 - The bundle.Dockerfile in the repo is obsolete
1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1966654 - Alertmanager PDB is not created, but Prometheus UWM is
1966672 - Add Sprint 201 translations
1966675 - Admin console string updates
1966677 - Change comma to semicolon
1966683 - Translation bugs from Sprint 201 files
1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly
1966697 - Garbage collector logs every interval - move to debug level
1966717 - include full timestamps in the logs
1966759 - Enable downstream plugin for Operator SDK
1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version
1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1
1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube"
1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install
1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image
1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored
1967197 - 404 errors loading some i18n namespaces
1967207 - Getting started card: console customization resources link shows other resources
1967208 - Getting started card should use semver library for parsing the version instead of string manipulation
1967234 - Console is continuously polling for ConsoleLink acm-link
1967275 - Awkward wrapping in getting started dashboard card
1967276 - Help menu tooltip overlays dropdown
1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check
1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit
1967423 - [master] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion
1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests
1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small
1967578 - [4.8.0] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion
1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit
1967595 - Fixes the remaining lint issues
1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict
1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL
1967625 - Add OpenShift Dockerfile for cloud-provider-aws
1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane"
1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube"
1967639 - Console whitescreens if user preferences fail to load
1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io
1967667 - Add Sprint 202 Round 1 translations
1967713 - Insights widget shows invalid link to the OCM
1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming
1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute"
1967803 - should update to 7.5.5 for grafana resources version label
1967832 - Add more tests for periodic.go
1967833 - Add tasks pool to tasks_processing
1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS"
1967843 - Fix null reference to messagesToSearch in gather_logs.go
1967902 - [4.8.0] Assisted installer chrony manifests missing index numbering
1967933 - Network-Tools debug scripts not working as expected
1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied"
1968019 - drain timeout and pool degrading period is too short
1968067 - [master] Agent validation not including reason for being insufficient
1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed
1968175 - [4.8.0] Agent validation not including reason for being insufficient
1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration
1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required
1968435 - [4.8.0] Unclear message in case of missing clusterImageSet
1968436 - Listeners timeout updated to remain using default value
1968449 - [4.8.0] Wrong Install-config override documentation
1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters
1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations
1968454 - [4.8.0] backend events generated with wrong namespace for agent
1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready
1968515 - oc should set user-agent when talking with registry
1968531 - Sync upstream 1.8.0 downstream
1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly
1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted
1968625 - Pods using sr-iov interfaces failing to start for Failed to create pod sandbox
1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil
1968701 - Bare metal IPI installation is failed due to worker inspection failure
1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed
1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning
1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom
1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile
1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it
1969363 - [4.8.0] Infra env should show the time that ISO was generated.
1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it
1969386 - Filesystem's Utilization doesn't show in VM overview tab
1969397 - OVN bug causing subports to stay DOWN fails installations
1969470 - [4.8.0] Misleading error in case of install-config override bad input
1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step
1969525 - Replace golint with revive
1969535 - Topology edit icon does not link correctly when branch name contains slash
1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it
1969551 - [4.8.0] Assisted service times out on GetNextSteps due to oc adm release info taking too long
1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert
1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire
1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io
1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1
1969626 - Portforward stream cleanup can cause kubelet to panic
1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out
1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check
1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso
1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups
1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml
1969784 - WebTerminal widget should send resize events
1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails
1969891 - Fix rotated pipelinerun status icon issue in safari
1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse
1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available
1969951 - Cluster local doesn't work for knative services created from dev console
1969969 - ironic-rhcos-downloader container uses and old base image
1970062 - ccoctl does not work with STS authentication
1970068 - ovnkube-master logs "Failed to find node ips for gateway" error
1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator
1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change
1970262 - [4.8.0] Remove Agent CRD Status fields not needed
1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs
1970269 - [4.8.0] missing role in agent CRD
1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs
1970381 - Monitoring dashboards: Custom time range inputs should retain their values
1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed
1970401 - [4.8.0] AgentLabelSelector is required yet not supported
1970415 - SR-IOV Docs needs documentation for disabling port security on a network
1970470 - Add pipeline annotation to Secrets which are created for a private repo
1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod
1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io
1970828 - "500 Internal Error" for all openshift-monitoring routes
1970975 - 4.7 -> 4.8 upgrades on AWS take longer than expected
1971068 - Removing invalid AWS instances from the CF templates
1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts
1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !
1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces
1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network
1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable
1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator
1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service
1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery
1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)
1971739 - Keep /boot RW when kdump is enabled
1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly
1972128 - ironic-static-ip-manager container still uses 4.7 base image
1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are
1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster
1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted
1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected
1972426 - Adopt failure can trigger deprovisioning
1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage
1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration
1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall
1972565 - performance issues due to lost node, pods taking too long to relaunch
1972662 - DPDK KNI modules need some additional tools
1972676 - Requirements for authenticating kernel modules with X.509
1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings
1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret
1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration
1972768 - kube-apiserver setup fail while installing SNO due to port being used
1972864 - New `local-with-fallback` service annotation does not preserve source IP
1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8
1973117 - No storage class is installed, OCS and CNV installations fail
1973233 - remove kubevirt images and references
1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld.
1973428 - Placeholder bug for OCP 4.8.0 image release
1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped
1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue
1973995 - [Feature:IPv6DualStack] tests are failing in dualstack
1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings
1974447 - Requirements for nvidia GPU driver container for driver toolkit
1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events.
1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel
1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion
1974746 - [4.8.0] File system usage not being logged appropriately
1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay.
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster
1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string
1974850 - [4.8] coreos-installer failing Execshield
1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift
1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing
1975155 - Kubernetes service IP cannot be accessed for rhel worker
1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types
1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData
1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified
1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve
1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn
1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient"
1975789 - worker nodes rebooted when we simulate a case where the api-server is down
1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]
1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded
1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted
1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts
1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO
1977233 - [4.8] Unable to authenticate against IDP after upgrade to 4.8-rc.1
1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO
1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller
1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes
1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses
1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8
1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod
1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used
1980788 - NTO-shipped stalld can segfault
1981633 - enhance service-ca injection
1982250 - Performance Addon Operator fails to install after catalog source becomes ready
1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"
References:
https://access.redhat.com/security/cve/CVE-2016-2183 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-15106 https://access.redhat.com/security/cve/CVE-2020-15112 https://access.redhat.com/security/cve/CVE-2020-15113 https://access.redhat.com/security/cve/CVE-2020-15114 https://access.redhat.com/security/cve/CVE-2020-15136 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-26541 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2021-3114 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3636 https://access.redhat.com/security/cve/CVE-2021-20206 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20291 https://access.redhat.com/security/cve/CVE-2021-21419 https://access.redhat.com/security/cve/CVE-2021-21623 https://access.redhat.com/security/cve/CVE-2021-21639 https://access.redhat.com/security/cve/CVE-2021-21640 https://access.redhat.com/security/cve/CVE-2021-21648 https://access.redhat.com/security/cve/CVE-2021-22133 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-25735 https://access.redhat.com/security/cve/CVE-2021-25737 https://access.redhat.com/security/cve/CVE-2021-26539 
https://access.redhat.com/security/cve/CVE-2021-26540 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-29059 https://access.redhat.com/security/cve/CVE-2021-29622 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33034 https://access.redhat.com/security/cve/CVE-2021-33194 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/updates/classification/#moderate
Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ d4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26 Z7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs ImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h gep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8 TWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX X//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb 1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA mtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/ 0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6 dpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx +bLvJlpGiMg= =NJcM -----END PGP SIGNATURE-----
-- RHSA-announce mailing list: RHSA-announce@redhat.com, https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)

Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.

Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
VDSM manages and monitors the host's storage, memory and networks, as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.
Bug Fix(es):
- An update in libvirt has changed the way block threshold events are submitted. As a result, VDSM was confused by the libvirt event and tried to look up a drive, logging a warning about a missing drive. In this release, VDSM has been adapted to handle the new libvirt behavior and does not log warnings about missing drives. (BZ#1948177)
- Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other and sometimes prevented migration cleanup, resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)
- Previously, when failing to execute a snapshot and re-executing it later, the second try would fail due to using the previous execution's data. In this release, this data is used only when needed, in recovery mode. (BZ#1984209)
- Then the engine deletes the volume and causes data corruption.

1998017 - Keep cinderlib dependencies optional for 4.4.8
Bug Fix(es):
- Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)
- Importing of cluster fails due to error/typo in generated command (BZ#1936642)
- RHACM 2.2.2 images (BZ#1938215)
- 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on aws, vsphere (BZ#1941778)
Summary:
The Migration Toolkit for Containers (MTC) 1.7.4 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API
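The Kubernetes API route mentioned above works through MTC's custom resources. As a minimal sketch (not taken from this advisory), a migration could be triggered by creating a MigMigration that references an existing MigPlan; the resource names `example-migplan` and `example-migration` are illustrative placeholders, and the MigPlan is assumed to already exist in the `openshift-migration` namespace:

```yaml
# Hypothetical manifest: start a migration for an existing MigPlan via the
# Kubernetes API rather than the MTC web console. All names are placeholders.
apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: example-migration
  namespace: openshift-migration
spec:
  migPlanRef:
    name: example-migplan
    namespace: openshift-migration
  quiescePods: true   # scale application pods down before copying volume data
  stage: false        # full (final) migration instead of a stage migration
```

Applied with `kubectl apply -f`, progress would then be visible in the MigMigration's status conditions.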
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1466", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "primavera unifier", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.7" }, { "model": "financial services crime and compliance management studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.8.3.0" }, { "model": "jd 
edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.1" }, { "model": "health sciences data management workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.0.0.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "19.12.11" }, { "model": "banking trade finance process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.12.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "20.12.0" }, { "model": "primavera unifier", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "banking trade finance process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "communications cloud native core policy", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.11.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.12" }, { "model": "lodash", "scope": "lt", "trust": 1.0, "vendor": "lodash", "version": "4.17.21" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "20.12.7" }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "18.8.0" }, { "model": "banking trade finance process management", "scope": "eq", 
"trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.12" }, { "model": "cloud manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "health sciences data management workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "2.5.2.1" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.9.0" }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.2.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "19.12.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "communications design studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.4.2.0.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "system manager", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": "9.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "18.8.12" }, { "model": "active iq unified manager", "scope": "eq", "trust": 1.0, "vendor": 
"netapp", "version": null }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "financial services crime and compliance management studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.8.2.0" }, { "model": "retail customer management and segmentation foundation", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.0" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.0" }, { "model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "18.8" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.4" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.3.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12.11" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "communications services gatekeeper", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.0" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "lodash", "scope": "eq", "trust": 0.8, "vendor": "lodash", "version": "4.17.21" }, { "model": "lodash", "scope": "eq", "trust": 0.8, "vendor": "lodash", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "NVD", "id": "CVE-2021-23337" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { 
"@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:lodash:lodash:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "4.17.21", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:18.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12", "versionStartIncluding": "17.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12.11", "versionStartIncluding": "17.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "20.12.7", "versionStartIncluding": "20.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "19.12.11", "versionStartIncluding": "19.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "18.8.12", "versionStartIncluding": "18.8.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": 
true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_design_studio:7.4.2.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:1.9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:2.5.2.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:3.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:vmware_vsphere:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:linux:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:active_iq_unified_manager:-:*:*:*:*:windows:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:cloud_manager:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:system_manager:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23337" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ], "trust": 1.3 }, "cve": "CVE-2021-23337", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" 
}, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 6.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "Single", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 6.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2021-23337", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:S/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 6.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.0, "id": "VHN-381798", "impactScore": 6.4, "integrityImpact": "PARTIAL", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:S/C:P/I:P/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.2, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.2, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": 
"CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 7.2, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2021-23337", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "High", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-23337", "trust": 1.8, "value": "HIGH" }, { "author": "report@snyk.io", "id": "CVE-2021-23337", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202102-1137", "trust": 0.6, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-381798", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-23337", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-381798" }, { "db": "VULMON", "id": "CVE-2021-23337" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Lodash versions prior to 4.17.21 are vulnerable to Command Injection via the template function. Lodash Contains a command injection vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or vendor announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. 
\nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. \nWith this release, each cluster level has it\u0027s own value, defaulting to\nfalse for 4.4 and above. This will reduce unnecessary overhead in removing\ntime outs of the file system freeze command. (BZ#1932284)\n\n* With this release, running virtual machines is supported for up to 16TB\nof RAM on x86_64 architectures. (BZ#1944723)\n\n* This release adds the gathering of oVirt/RHV related certificates to\nallow easier debugging of issues for faster customer help and issue\nresolution. \nInformation from certificates is now included as part of the sosreport. \nNote that no corresponding private key information is gathered, due to\nsecurity considerations. (BZ#1845877)\n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine\n1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors\n1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain\n1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine\n1717411 - improve engine logging when migration fail\n1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs\n1775145 - Incorrect message from hot-plugging memory\n1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC. \n1845877 - [RFE] Collect information about RHV PKI\n1875363 - engine-setup failing on FIPS enabled rhel8 machine\n1906074 - [RFE] Support disks copy between regular and managed block storage domains\n1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration\n1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning\n1919195 - Unable to create snapshot without saving memory of running VM from VM Portal. \n1919984 - engine-setup failse to deploy the grafana service in an external DWH server\n1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal\n1926018 - Failed to run VM after FIPS mode is enabled\n1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing \u0027rsyslog-gnutls\u0027 package. 
\n1928158 - Rename \u0027CA Certificate\u0027 link in welcome page to \u0027Engine CA certificate\u0027\n1928188 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929211 - Failed to parse \u0027writeOps\u0027 value \u0027XXXX\u0027 to integer: For input string: \"XXXX\"\n1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error \"missing groups or modules: virt:8.4\"\n1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful\n1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured\n1932284 - Engine handled FS freeze is not fast enough for Windows systems\n1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed\n1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2\n1943267 - Snapshot creation is failing for VM having vGPU. \n1944723 - [RFE] Support virtual machines with 16TB memory\n1948577 - [welcome page] remove \"Infrastructure Migration\" section (obsoleted)\n1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule\n1949547 - rhv-log-collector-analyzer report contains \u0027b characters\n1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6\n1950466 - Host installation failed\n1954401 - HP VMs pinning is wiped after edit-\u003eok and pinned to first physical CPUs. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update\nAdvisory ID: RHSA-2021:2438-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2438\nIssue date: 2021-07-27\nCVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106 \n CVE-2020-15112 CVE-2020-15113 CVE-2020-15114 \n CVE-2020-15136 CVE-2020-26160 CVE-2020-26541 \n CVE-2020-28469 CVE-2020-28500 CVE-2020-28852 \n CVE-2021-3114 CVE-2021-3121 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3537 CVE-2021-3541 CVE-2021-3636 \n CVE-2021-20206 CVE-2021-20271 CVE-2021-20291 \n CVE-2021-21419 CVE-2021-21623 CVE-2021-21639 \n CVE-2021-21640 CVE-2021-21648 CVE-2021-22133 \n CVE-2021-23337 CVE-2021-23362 CVE-2021-23368 \n CVE-2021-23382 CVE-2021-25735 CVE-2021-25737 \n CVE-2021-26539 CVE-2021-26540 CVE-2021-27292 \n CVE-2021-28092 CVE-2021-29059 CVE-2021-29622 \n CVE-2021-32399 CVE-2021-33034 CVE-2021-33194 \n CVE-2021-33909 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.8.2 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nThis release includes a security update for Red Hat OpenShift Container\nPlatform 4.8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. 
\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.2. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2021:2437\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nSecurity Fix(es):\n\n* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n(CVE-2016-2183)\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n\n* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)\n\n* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)\n\n* etcd: DoS in wal/wal.go (CVE-2020-15112)\n\n* etcd: directories created via os.MkdirAll are not checked for permissions\n(CVE-2020-15113)\n\n* etcd: gateway can include itself as an endpoint resulting in resource\nexhaustion and leads to DoS (CVE-2020-15114)\n\n* etcd: no authentication is performed against endpoints provided in the\n- --endpoints flag (CVE-2020-15136)\n\n* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* golang: crypto/elliptic: incorrect operations on the P-224 curve\n(CVE-2021-3114)\n\n* containernetworking-cni: Arbitrary path injection via type field in CNI\nconfiguration (CVE-2021-20206)\n\n* 
containers/storage: DoS via malicious image (CVE-2021-20291)\n\n* prometheus: open redirect under the /new endpoint (CVE-2021-29622)\n\n* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)\n\n* go.elastic.co/apm: leaks sensitive HTTP headers during panic\n(CVE-2021-22133)\n\nSpace precludes listing in detail the following additional CVEs fixes:\n(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),\n(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and\n(CVE-2021-23368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-x86_64\n\nThe image digest is\nsha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-s390x\n\nThe image digest is\nsha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le\n\nThe image digest is\nsha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n3. 
Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster\n- -cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)\n1725981 - oc explain does not work well with full resource.group names\n1747270 - [osp] Machine with name \"\u003ccluster-id\u003e-worker\"couldn\u0027t join the cluster\n1772993 - rbd block devices attached to a host are visible in unprivileged container pods\n1786273 - [4.6] KAS pod logs show \"error building openapi models ... has invalid property: anyOf\" for CRDs\n1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts\n1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header\n1812212 - ArgoCD example application cannot be downloaded from github\n1817954 - [ovirt] Workers nodes are not numbered sequentially\n1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole\n1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with \"Unable to connect to the server\"\n1825417 - The containerruntimecontroller doesn\u0027t roll back to CR-1 if we delete CR-2\n1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades\n1835264 - Intree provisioner doesn\u0027t respect PVC.spec.dataSource sometimes\n1839101 - Some sidebar links in developer perspective don\u0027t follow 
same project\n1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes\n1846875 - Network setup test high failure rate\n1848151 - Console continues to poll the ClusterVersion resource when the user doesn\u0027t have authority\n1850060 - After upgrading to 3.11.219 timeouts are appearing. \n1852637 - Kubelet sets incorrect image names in node status images section\n1852743 - Node list CPU column only show usage\n1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values\n1857008 - [Edge] [BareMetal] Not provided STATE value for machines\n1857477 - Bad helptext for storagecluster creation\n1859382 - check-endpoints panics on graceful shutdown\n1862084 - Inconsistency of time formats in the OpenShift web-console\n1864116 - Cloud credential operator scrolls warnings about unsupported platform\n1866222 - Should output all options when runing `operator-sdk init --help`\n1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard\n1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert\n1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions\n1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host\n1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions\n1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go\n1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS\n1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag\n1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method\n1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics\n1871770 - [IPI baremetal] The Keepalived.conf file is not 
indented evenly\n1872659 - ClusterAutoscaler doesn\u0027t scale down when a node is not needed anymore\n1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack\n1873649 - proxy.config.openshift.io should validate user inputs\n1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials\n1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable\n1876918 - scheduler test leaves taint behind\n1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1\n1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable\n1878685 - Ingress resource with \"Passthrough\" annotation does not get applied when using the newer \"networking.k8s.io/v1\" API\n1879077 - Nodes tainted after configuring additional host iface\n1879140 - console auth errors not understandable by customers\n1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens\n1879184 - CVO must detect or log resource hotloops\n1879495 - [4.6] namespace \\\u201copenshift-user-workload-monitoring\\\u201d does not exist\u201d\n1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string\n1879944 - [OCP 4.8] Slow PV creation with vsphere\n1880757 - AWS: master not removed from LB/target group when machine deleted\n1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)\n1881210 - nodePort for router-default metrics with NodePortService does not exist\n1881481 - CVO hotloops on some service manifests\n1881484 - CVO hotloops on deployment manifests\n1881514 - CVO hotloops on imagestreams from cluster-samples-operator\n1881520 - CVO hotloops on (some) clusterrolebindings\n1881522 - CVO hotloops on clusterserviceversions packageserver\n1881662 - Error getting volume limit for plugin kubernetes.io/\u003cname\u003e in kubelet logs\n1881694 - Evidence of 
disconnected installs pulling images from the local registry instead of quay.io\n1881938 - migrator deployment doesn\u0027t tolerate masters\n1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability\n1883587 - No option for user to select volumeMode\n1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine\n1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster\n1884800 - Failed to set up mount unit: Invalid argument\n1885186 - Removing ssh keys MC does not remove the key from authorized_keys\n1885349 - [IPI Baremetal] Proxy Information Not passed to metal3\n1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses\n1886572 - auth: error contacting auth provider when extra ingress (not default) goes down\n1887849 - When creating new storage class failure_domain is missing. \n1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs\n1889689 - AggregatedAPIErrors alert may never fire\n1890678 - Cypress: Fix \u0027structure\u0027 accesibility violations\n1890828 - Intermittent prune job failures causing operator degradation\n1891124 - CP Conformance: CRD spec and status failures\n1891301 - Deleting bmh by \"oc delete bmh\u0027 get stuck\n1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass\n1891766 - [LSO] Min-Max filter\u0027s from OCS wizard accepts Negative values and that cause PV not getting created\n1892642 - oauth-server password metrics do not appear in UI after initial OCP installation\n1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version\n1893850 - Add an alert for requests rejected by the apiserver\n1893999 - can\u0027t login ocp cluster with oc 4.7 client without the username\n1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster 
deletion\n1895053 - Allow builds to optionally mount in cluster trust stores\n1896226 - recycler-pod template should not be in kubelet static manifests directory\n1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types\n1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name\n1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install\n1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout\n1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing\n1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability\n1899057 - fix spurious br-ex MAC address error log\n1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay\n1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect\n1900454 - Enable host-based disk encryption on Azure platform\n1900819 - Scaled ingress replicas following sharded pattern don\u0027t balance evenly across multi-AZ\n1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed\n1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API\n1901648 - \"do you need to set up custom dns\" tooltip inaccurate\n1902003 - Jobs Completions column is not sorting when there are \"0 of 1\" and \"1 of 1\" in the list. \n1902076 - image registry operator should monitor status of its routes\n1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs\n1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given\n1903228 - Pod stuck in Terminating, runc init process frozen\n1903383 - Latest RHCOS 47.83. 
builds failing to install: mount /root.squashfs failed\n1903553 - systemd container renders node NotReady after deleting it\n1903700 - metal3 Deployment doesn\u0027t have unique Pod selector\n1904006 - The --dir option doest not work for command `oc image extract`\n1904505 - Excessive Memory Use in Builds\n1904507 - vsphere-problem-detector: implement missing metrics\n1904558 - Random init-p error when trying to start pod\n1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests\n1905147 - ConsoleQuickStart Card\u0027s prerequisites is a combined text instead of a list\n1905159 - Installation on previous unused dasd fails after formatting\n1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory\n1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails\n1905577 - Control plane machines not adopted when provisioning network is disabled\n1905627 - Warn users when using an unsupported browser such as IE\n1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP\n1905849 - Default volumesnapshotclass should be created when creating default storageclass\n1906056 - Bundles skipped via the `skips` field cannot be pinned\n1906102 - CBO produces standard metrics\n1906147 - ironic-rhcos-downloader should not use --insecure\n1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart\n1906740 - [aws]Machine should be \"Failed\" when creating a machine with invalid region\n1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage\n1907315 - the internal load balancer annotation for AWS should use \"true\" instead of \"0.0.0.0/0\" as value\n1907353 - [4.8] OVS daemonset is wasting resources even though it doesn\u0027t do anything\n1907614 - Update kubernetes deps to 1.20\n1908068 - Enable DownwardAPIHugePages feature gate\n1908169 - The example 
of Import URL is \"Fedora cloud image list\" for all templates. \n1908170 - sriov network resource injector: Hugepage injection doesn\u0027t work with mult container\n1908343 - Input labels in Manage columns modal should be clickable\n1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures\n1908655 - \"Evaluating rule failed\" for \"record: node:node_num_cpu:sum\" rule\n1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes\n1908765 - [SCALE] enable OVN lflow data path groups\n1908774 - [SCALE] enable OVN DB memory trimming on compaction\n1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it\n1909091 - Pod/node/ip/template isn\u0027t showing when vm is running\n1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error\n1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing\n1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade\n1910067 - UPI: openstacksdk fails on \"server group list\"\n1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing\n1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn\u0027t match node selector: AWS compute machines without status\n1910378 - socket timeouts for webservice communication between pods\n1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling\n1910500 - Could not list CSI provisioner on web when create storage class on GCP platform\n1911211 - Should show the cert-recovery-controller version correctly\n1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames\n1912571 - libvirt: Support setting dnsmasq options through the install config\n1912820 - openshift-apiserver 
Available is False with 3 pods not ready for a while during upgrade\n1913112 - BMC details should be optional for unmanaged hosts\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913341 - GCP: strange cluster behavior in CI run\n1913399 - switch to v1beta1 for the priority and fairness APIs\n1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint\n1913532 - After a 4.6 to 4.7 upgrade, a node went unready\n1913974 - snapshot test periodically failing with \"can\u0027t open \u0027/mnt/test/data\u0027: No such file or directory\"\n1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs\n1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root\n1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20\n1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names\n1915693 - Not able to install gpu-operator on cpumanager enabled node. \n1915971 - Role and Role Binding breadcrumbs do not work as expected\n1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page\n1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall\n1916392 - scrape priority and fairness endpoints for must-gather\n1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form\n1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with \"Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready\"\n1916553 - Default template\u0027s description is empty on details tab\n1916593 - Destroy cluster sometimes stuck in a loop\n1916872 - need ability to reconcile exgw annotations on pod add\n1916890 - [OCP 4.7] api or api-int not available during installation\n1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs. 
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details -> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in "Insights" popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set - was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn't support "csi.storage.k8s.io/fsTyps" parameter
1932135 - When "iopsPerGB" parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When "iopsPerGB" parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can't find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: "\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows to edit and then reverse the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitarly
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can't finish install sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen isntt stop loading
1937005 - when kuryr quotas are unlimited, we should not sent alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS doesn't be extracted well
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Delete HCO is stucking if there is still VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - `cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a wach instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashbard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Components in bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - Components in ovirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add `kata-osbuilder-generate.service` as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view\n1941859 - [assisted operator] assisted pod deploy first time in error state\n1941901 - Toleration merge logic does not account for multiple entries with the same key\n1941915 - No validation against template name in boot source customization\n1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description\n1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8\n1941990 - Pipeline metrics endpoint changed in osp-1.4\n1941995 - fix backwards incompatible trigger api changes in osp1.4\n1942086 - Administrator -\u003e Home - i18n misses\n1942117 - Administrator -\u003e Workloads - i18n misses\n1942125 - Administrator -\u003e Serverless - i18n misses\n1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)\n1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail\n1942271 - Insights operator doesn\u0027t gather pod information from openshift-cluster-version\n1942375 - CRI-O failing with error \"reserving ctr name\"\n1942395 - The status is always \"Updating\" on dc detail page after deployment has failed. 
\n1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied\n1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate\n1942536 - Corrupted image preventing containers from starting\n1942548 - Administrator -\u003e Networking - i18n misses\n1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic\n1942555 - Network policies in ovn-kubernetes don\u0027t support external traffic from router when the endpoint publishing strategy is HostNetwork\n1942557 - Query is reporting \"no datapoint\" when label cluster=\"\" is set but work when the label is removed or when running directly in Prometheus\n1942608 - crictl cannot list the images with an error: error locating item named \"manifest\" for image with ID\n1942614 - Administrator -\u003e Storage - i18n misses\n1942641 - Administrator -\u003e Builds - i18n misses\n1942673 - Administrator -\u003e Pipelines - i18n misses\n1942694 - Resource names with a colon do not display property in the browser window title\n1942715 - Administrator -\u003e User Management - i18n misses\n1942716 - Quay Container Security operator has Medium \u003c-\u003e Low colors reversed\n1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]\n1942736 - Administrator -\u003e Administration - i18n misses\n1942749 - Install Operator form should use info icon for popovers\n1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls\n1942839 - Windows VMs fail to start on air-gapped environments\n1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set\n1942858 - [RFE]Confusing detach volume UX\n1942883 - AWS EBS CSI driver does not support partitions\n1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy\n1942935 - must-gather improvements\n1943145 - vsphere: 
client/bootstrap CSR double create\n1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked\n1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest\n1943238 - The conditions table does not occupy 100% of the width. \n1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane\n1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB. \n1943315 - avoid workload disruption for ICSP changes\n1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes\n1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest\n1943356 - Dynamic plugins surfaced in the UI should be referred to as \"Console plugins\"\n1943539 - crio-wipe is failing to start \"Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container\"\n1943543 - DeploymentConfig Rollback doesn\u0027t reset params correctly\n1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement\n1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds\n1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage\n1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn\n1943649 - don\u0027t use hello-openshift for network-check-target\n1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress\n1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions\n1943804 - API server on AWS takes disruption between 70s and 110s after pod 
begins termination via external LB\n1943845 - Router pods should have startup probes configured\n1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors\n1944160 - CNO: nbctl daemon should log reconnection info\n1944180 - OVN-Kube Master does not release election lock on shutdown\n1944246 - Ironic fails to inspect and move node to \"manageable\u0027 but get bmh remains in \"inspecting\"\n1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region\n1944509 - Translatable texts without context in ssh expose component\n1944581 - oc project not works with cluster proxy\n1944587 - VPA could not take actions based on the recommendation when min-replicas=1\n1944590 - The field name \"VolumeSnapshotContent\" is wrong on VolumeSnapshotContent detail page\n1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI\n1944631 - openshif authenticator should not accept non-hashed tokens\n1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with \".. 
still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock\"\n1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures\n1944674 - Project field become to \"All projects\" and disabled in \"Review and create virtual machine\" step in devconsole\n1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods\n1944761 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present\n1944763 - field level help instances do not use common util component \u003cFieldLevelHelp\u003e\n1944853 - Update to nodejs \u003e=14.15.4 for ARM\n1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts\n1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation\n1945027 - Button \u0027Copy SSH Command\u0027 does not work\n1945085 - Bring back API data in etcd test\n1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled\n1945103 - \u0027User credentials\u0027 shows even the VM is not running\n1945104 - In k8s 1.21 bump \u0027[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume\u0027 tests are disabled\n1945146 - Remove pipeline Tech preview badge for pipelines GA operator\n1945236 - Bootstrap ignition shim doesn\u0027t follow proxy settings\n1945261 - Operator dependency not consistently chosen from default channel\n1945312 - project deletion does not reset UI project context\n1945326 - console-operator: does not check route health periodically\n1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules\n1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]\n1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly\n1945443 - operator-lifecycle-manager-packageserver flaps 
Available=False with no reason or message\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1945548 - catalog resource update failed if spec.secrets set to \"\"\n1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x\n1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION\n1945630 - Pod log filename no longer in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin\n1945646 - gcp-routes.sh running as initrc_t unnecessarily\n1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret\n1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry\n1945687 - Dockerfile needs updating to new container CI registry\n1945700 - Syncing boot mode after changing device should be restricted to Supermicro\n1945816 - \" Ingresses \" should be kept in English for Chinese\n1945818 - Chinese translation issues: Operator should be the same with English `Operators`\n1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out\n1945910 - [aws] support byo iam roles for instances\n1945948 - SNO: pods can\u0027t reach ingress when the ingress uses a different IPv6. \n1946079 - Virtual master is not getting an IP address\n1946097 - [oVirt] oVirt credentials secret contains unnecessary \"ovirt_cafile\"\n1946119 - panic parsing install-config\n1946243 - No relevant error when pg limit is reached in block pools page\n1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image\n1946320 - Incorrect error message in Deployment Attach Storage Page\n1946449 - [e2e][automation] Fix cloud-init tests as UI changed\n1946458 - Edit Application action overwrites Deployment envFrom values on save\n1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI. 
\n1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default\n1946497 - local-storage-diskmaker pod logs \"DeviceSymlinkExists\" and \"not symlinking, could not get lock: \u003cnil\u003e\"\n1946506 - [on-prem] mDNS plugin no longer needed\n1946513 - honor use specified system reserved with auto node sizing\n1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready\n1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster\n1946607 - etcd readinessProbe is not reflective of actual readiness\n1946705 - Fix issues with \"search\" capability in the Topology Quick Add component\n1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation\n1946788 - Serial tests are broken because of router\n1946790 - Marketplace operator flakes Available=False OperatorStarting during updates\n1946838 - Copied CSVs show up as adopted components\n1946839 - [Azure] While mirroring images to private registry throwing error: invalid character \u0027\u003c\u0027 looking for beginning of value\n1946865 - no \"namespace:kube_pod_container_resource_requests_cpu_cores:sum\" and \"namespace:kube_pod_container_resource_requests_memory_bytes:sum\" metrics\n1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken\n1946922 - Ingress details page doesn\u0027t show referenced secret name and link\n1946929 - the default dns operator\u0027s Progressing status is always True and cluster operator dns Progressing status is False\n1947036 - \"failed to create Matchbox client or connect\" on e2e-metal jobs or metal clusters via cluster-bot\n1947066 - machine-config-operator pod crashes when noProxy is *\n1947067 - [Installer] Pick up upstream fix for installer console output\n1947078 - Incorrect skipped status for conditional tasks in the pipeline run\n1947080 - SNO IPv6 with 
\u0027temporary 60-day domain\u0027 option fails with IPv4 exception\n1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1947164 - Print \"Successfully pushed\" even if the build push fails. \n1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed. \n1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)\n1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name\u0027s\n1947360 - [vSphere csi driver operator] operator pod runs as \u201cBestEffort\u201d qosClass\n1947371 - [vSphere csi driver operator] operator doesn\u0027t create \u201ccsidriver\u201d instance\n1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout\n1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot\n1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)\n1947663 - disk details are not synced in web-console\n1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin\n1947684 - MCO on SNO sometimes has rendered configs and sometimes does not\n1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals. 
\n1947719 - 8 APIRemovedInNextReleaseInUse info alerts display\n1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods\n1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?\n1947771 - [kube-descheduler]descheduler operator pod should not run as \u201cBestEffort\u201d qosClass\n1947774 - CSI driver operators use \"Always\" imagePullPolicy in some containers\n1947775 - [vSphere csi driver operator] doesn\u2019t use the downstream images from payload. \n1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade\n1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade\n1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display\n1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert\n1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display\n1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this 
component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display\n1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display\n1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin\n1947828 - `download it` link should save pod log in \u003cpod-name\u003e-\u003ccontainer-name\u003e.log format\n1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed\n1947917 - Egress Firewall does not reliably apply firewall rules\n1947946 - Operator upgrades can delete existing CSV before completion\n1948011 - openshift-controller-manager constantly reporting type \"Upgradeable\" status Unknown\n1948012 - service-ca constantly reporting type \"Upgradeable\" status Unknown\n1948019 - [4.8] Large number of requests to the infrastructure cinder volume service\n1948022 - Some on-prem namespaces missing from must-gather\n1948040 - cluster-etcd-operator: etcd is using deprecated logger\n1948082 - Monitoring should not set Available=False with no reason on updates\n1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O. 
\n1948232 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s maxSurge and service\u0027s ipFamilies and ipFamilyPolicy fields\n1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later\n1948359 - [aws] shared tag was not removed from user provided IAM role\n1948410 - [LSO] Local Storage Operator uses imagePullPolicy as \"Always\"\n1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn\u0027t take effective after changing\n1948427 - No action is triggered after click \u0027Continue\u0027 button on \u0027Show community Operator\u0027 windows\n1948431 - TechPreviewNoUpgrade does not enable CSI migration\n1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node\n1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge\n1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]\n1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes\n1948513 - get-resources.sh doesn\u0027t honor the no_proxy settings\n1948524 - \u0027DeploymentUpdated\u0027 Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute\n1948546 - VM of worker is in error state when a network has port_security_enabled=False\n1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand\n1948555 - A lot of events \"rpc error: code = DeadlineExceeded desc = context deadline exceeded\" were seen in azure disk csi driver verification test\n1948563 - End-to-End Secure boot deployment fails \"Invalid value for input variable\"\n1948582 - Need ability to specify local gateway mode in CNO config\n1948585 - Need a CI jobs to test local gateway mode with bare metal\n1948592 - [Cluster Network Operator] Missing Egress 
Router Controller\n1948606 - DNS e2e test fails \"[sig-arch] Only known images used by tests\" because it does not use a known image\n1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]\n1948626 - TestRouteAdmissionPolicy e2e test is failing often\n1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI\n1948634 - upgrades: allow upgrades without version change\n1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io \"cluster\" not found\n1948701 - unneeded CCO alert already covered by CVO\n1948703 - p\u0026f: probes should not get 429s\n1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`\n1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile\n1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile\n1948711 - thanos querier and prometheus-adapter should have 2 replicas\n1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile\n1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile\n1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector\n1948719 - Machine API components should use 1.21 dependencies\n1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile\n1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1948771 - ~50% of GCP upgrade jobs in 4.8 failing with \"AggregatedAPIDown\" alert on packages.coreos.com\n1948782 - Stale 
references to the single-node-production-edge cluster profile\n1948787 - secret.StringData shouldn\u0027t be used for reads\n1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer\n1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page\n1948919 - Need minor update in message on channel modal\n1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region\n1948926 - Memory Usage of Dashboard \u0027Kubernetes / Compute Resources / Pod\u0027 contain wrong CPU query\n1948936 - [e2e][automation][prow] Prow script point to deleted resource\n1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer\n1948953 - Uninitialized cloud provider error when provisioning a cinder volume\n1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages\n1948966 - Add the ability to run a gather done by IO via a Kubernetes Job\n1948981 - Align dependencies and libraries with latest ironic code\n1948998 - style fixes by GoLand and golangci-lint\n1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way. 
\n1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV\n1949022 - Openshift 4 has a zombie problem\n1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil\n1949041 - vsphere: wrong image names in bundle\n1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)\n1949050 - Bump k8s to latest 1.21\n1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig\n1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service\n1949075 - Extend openshift/api for Add card customization\n1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues\n1949096 - Restore private git clone tests\n1949099 - network-check-target code cleanup\n1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol\n1949145 - Move openshift-user-critical priority class to CCO\n1949155 - Console doesn\u0027t correctly check for favorited or last namespace on load if project picker used\n1949180 - Pipelines plugin model kinds aren\u0027t picked up by parser\n1949202 - sriov-network-operator not available from operatorhub on ppc64le\n1949218 - ccoctl not included in container image\n1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs\n1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors\n1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate\n1949306 - need a way to see top API accessors\n1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships\n1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring\n1949347 - apiserver-watcher support for dual-stack\n1949357 - 
manila-csi-controller pod not running due to secret lack(in another ns)\n1949361 - CoreDNS resolution failure for external hostnames with \"A: dns: overflow unpacking uint16\"\n1949364 - Mention scheduling profiles in scheduler operator repository\n1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error\n1949384 - Edit Default Pull Secret modal - i18n misses\n1949387 - Fix the typo in auto node sizing script\n1949404 - label selector on pvc creation page - i18n misses\n1949410 - The referred role doesn\u0027t exist if create rolebinding from rolebinding tab of role page\n1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses\n1949413 - Automatic boot order setting is done incorrectly when using by-path style device names\n1949418 - Controller factory workers should always restart on panic()\n1949419 - oauth-apiserver logs \"[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)\"\n1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin\n1949435 - ingressclass controller doesn\u0027t recreate the openshift-default ingressclass after deleting it\n1949480 - Listeners timeout are constantly being updated\n1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages\n1949509 - Kuryr should manage API LB instead of CNO\n1949514 - URL is not visible for routes at narrow screen widths\n1949554 - Metrics of vSphere CSI driver sidecars are not collected\n1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error \"egress bandwidth restriction -1 is not equals\"\n1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing\n1949591 - Alert does not catch 
removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set htt proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacemet UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n*3 (or n*4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console not works well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind
1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page
1961466 - Some dropdown placeholder text on route creation page is not translated
1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers")
1961536 - clusterdeployment without pull secret is crashing assisted service pod
1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop
1961545 - Fixing Documentation Generation
1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names"
1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig
1961561 - The encryption controllers send lots of request to an API server
1961582 - Build failure on s390x
1961644 - NodeAuthenticator tests are failing in IPv6
1961656 - driver-toolkit missing some release metadata
1961675 - Kebab menu of taskrun contains Edit options which should not be present
1961701 - Enhance gathering of events
1961717 - Update runtime dependencies to Wallaby builds for bugfixes
1961829 - Quick starts prereqs not shown when description is long
1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy
1961878 - Add Sprint 199 translations
1961897 - Remove history listener before console UI is unmounted
1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes
1962062 - Monitoring dashboards should support default values of "All"
1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a performanceprofile
1962095 - Replace gather-job image without FQDN
1962153 - VolumeSnapshot routes are ambiguous, too generic
1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime
1962219 - NTO relies on unreliable leader-for-life implementation.
1962256 - use RHEL8 as the vm-example
1962261 - Monitoring components requesting more memory than they use
1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster
1962347 - Cluster does not exist logs after successful installation
1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits
1962415 - duplicate zone information for in-tree PV after enabling migration
1962429 - Cannot create windows vm because kubemacpool.io denied the request
1962525 - [Migration] SDN migration stuck on MCO on RHV cluster
1962569 - NetworkPolicy details page should also show Egress rules
1962592 - Worker nodes restarting during OS installation
1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform
1962630 - NTO: Ship the current upstream TuneD
1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root
1962698 - Console-operator can not create resource console-public
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work - ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API", "sources": [ { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "VULHUB", "id": "VHN-381798" }, { "db": "VULMON", "id": "CVE-2021-23337" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-23337", "trust": 4.1 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.4 }, { "db": "PACKETSTORM", "id": "162901", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-001309", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163690", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164090", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.1225", "trust": 0.6 }, { "db": "AUSCERT", "id": 
"ESB-2021.1871", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5790", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3036", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2232", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2182", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2555", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2657", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4568", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2555", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5150", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072040", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021062703", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021051230", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022012753", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022011901", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022052615", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021090922", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202102-1137", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-381798", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-23337", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163276", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168352", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-381798" }, { "db": "VULMON", "id": "CVE-2021-23337" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "id": "VAR-202102-1466", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": 
"@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-381798" } ], "trust": 0.30766129 }, "last_update_date": "2023-12-18T10:45:22.903000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "NTAP-20210312-0006", "trust": 0.8, "url": "https://security.netapp.com/advisory/ntap-20210312-0006/" }, { "title": "IBM: Security Bulletin: IBM App Connect Enterprise Certified Container may be vulnerable to a command injection vulnerability (CVE-2021-23337)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=a6ab32faf6383cb0cedc0fcc02621330" }, { "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52" }, { "title": "IBM: Security Bulletin: A security vulnerability in Node.js lodash module affects IBM Cloud Pak for Multicloud Management Managed Service", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=be717afa91143ef04a4f0fde16d094de" }, { "title": "IBM: Security Bulletin: IBM Watson OpenScale on Cloud Pak for Data is impacted by Vulnerabilities in Node.js", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3a6796f7c08575af6f64adb2d3b31adb" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "blank", "trust": 0.1, "url": "https://github.com/cduplantis/blank " }, { "title": "Example.EWA.TypeScript.WebApplication", "trust": 0.1, "url": 
"https://github.com/refinitiv-api-samples/example.ewa.typescript.webapplication " }, { "title": "loginServer", "trust": 0.1, "url": "https://github.com/did-create-board/loginserver " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-23337" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-94", "trust": 1.1 }, { "problemtype": "Command injection (CWE-77) [NVD evaluation ]", "trust": 0.8 }, { "problemtype": "CWE-77", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-381798" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "NVD", "id": "CVE-2021-23337" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.3, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23337" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20210312-0006/" }, { "trust": 1.7, "url": "https://github.com/lodash/lodash/blob/ddfd9b11a0126db2302cb70ec9973b66baec0975/lodash.js%23l14851" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074932" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074930" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074928" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074931" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074929" }, { "trust": 1.7, "url": 
"https://snyk.io/vuln/snyk-js-lodash-1040724" }, { "trust": 1.7, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujan2022.html" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99475301/" }, { "trust": 0.8, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2657" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1225" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2021-23337/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-3/" }, { "trust": 
0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022012753" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6494365" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1871" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6493751" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022011901" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3036" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2555" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service-2/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-3/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6486333" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6524656" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072040" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021062703" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021051230" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2021-23337/" }, { "trust": 0.6, "url": 
"https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-openscale-on-cloud-pak-for-data-is-impacted-by-vulnerabilities-in-node-js/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2232" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5150" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2555" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2182" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5790" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-app-connect-enterprise-certified-container-may-be-vulnerable-to-a-command-injection-vulnerability-cve-2021-23337/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4568" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.2, "url": 
"https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13949" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, 
{ "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2179" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15112" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25737" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20291" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26540" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21419" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26539" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29059" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-2183" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21623" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2438" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20206" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15113" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21640" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2437" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15136" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15106" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26541" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-29622" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15114" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22133" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3636" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3459" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1168" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29529" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3347" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28374" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27152" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27365" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8559" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0686" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6429" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 
0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13435" } ], "sources": [ { "db": "VULHUB", "id": "VHN-381798" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-381798" }, { "db": "VULMON", "id": "CVE-2021-23337" }, { "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "NVD", "id": "CVE-2021-23337" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-02-15T00:00:00", "db": "VULHUB", "id": "VHN-381798" }, { "date": "2021-02-15T00:00:00", "db": "VULMON", "id": "CVE-2021-23337" }, { "date": "2021-04-05T00:00:00", "db": 
"JVNDB", "id": "JVNDB-2021-001309" }, { "date": "2021-06-24T17:54:53", "db": "PACKETSTORM", "id": "163276" }, { "date": "2021-06-01T15:17:45", "db": "PACKETSTORM", "id": "162901" }, { "date": "2021-07-28T14:53:49", "db": "PACKETSTORM", "id": "163690" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-09-09T13:33:33", "db": "PACKETSTORM", "id": "164090" }, { "date": "2021-04-13T15:38:30", "db": "PACKETSTORM", "id": "162151" }, { "date": "2022-09-13T15:42:14", "db": "PACKETSTORM", "id": "168352" }, { "date": "2021-02-15T13:15:12.560000", "db": "NVD", "id": "CVE-2021-23337" }, { "date": "2021-02-15T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-13T00:00:00", "db": "VULHUB", "id": "VHN-381798" }, { "date": "2022-09-13T00:00:00", "db": "VULMON", "id": "CVE-2021-23337" }, { "date": "2022-09-20T06:02:00", "db": "JVNDB", "id": "JVNDB-2021-001309" }, { "date": "2022-09-13T21:25:02.093000", "db": "NVD", "id": "CVE-2021-23337" }, { "date": "2022-11-11T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1137" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "163690" }, { "db": "CNNVD", "id": "CNNVD-202102-1137" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Lodash\u00a0 Command injection vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-001309" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": 
{ "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code injection", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-1137" } ], "trust": 0.6 } }
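The record above (CNNVD-202102-1137 / CVE-2021-23337) concerns command injection through lodash's `_.template`: before 4.17.21, the `variable` option was spliced unvalidated into source code handed to the `Function` constructor, so a crafted value could break out of the template and inject arbitrary code. A minimal JavaScript sketch of the pattern and a guard; `compileTemplate` and `SAFE_IDENTIFIER` are hypothetical names for illustration, not lodash's actual implementation:

```javascript
// The vulnerable pattern: a caller-supplied "variable" name is pasted
// straight into code that is compiled with the Function constructor.
// A value like  '){ evil() }('  would escape the intended expression.
// Guard: only accept a plain JavaScript identifier.
const SAFE_IDENTIFIER = /^[A-Za-z_$][A-Za-z0-9_$]*$/;

function compileTemplate(text, variableName) {
  if (!SAFE_IDENTIFIER.test(variableName)) {
    throw new Error(`unsafe template variable name: ${variableName}`);
  }
  // Only after validation is the name interpolated into compiled source.
  const source = `with (${variableName}) { return ${JSON.stringify(text)}; }`;
  return new Function(variableName, source);
}
```

The actual fix released in lodash 4.17.21 takes the same approach: it rejects `variable` values containing forbidden characters instead of compiling them.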
var-202201-0429
Vulnerability from variot
follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor. Bugs fixed (https://bugzilla.redhat.com/):
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2032128 - Observability - dashboard name contains `/` would cause error when generating dashboard cm
2033051 - ACM application placement fails after renaming the application name
2039197 - disable the obs metric collect should not impact the managed cluster upgrade
2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard
2042223 - the value of name label changed from clusterclaim name to cluster name
2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053279 - Application cluster status is not updated in UI after restoring
2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+
2057249 - RHACM 2.4.3 images
2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift
2059954 - Subscriptions stop reconciling after channel secrets are recreated
2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates
2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server
2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path
2074156 - Placementrule is not reconciling on a new fresh environment
2074543 - The cluster claimed from clusterpool can not auto imported
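Several entries above reference CVE-2022-0155, where follow-redirects (before 1.14.7) kept forwarding confidential headers such as `Cookie` and `Authorization` when a redirect pointed at a different host, exposing them to an unauthorized actor. A minimal JavaScript sketch of the mitigation idea; `headersForRedirect` is a hypothetical helper, not the library's actual API:

```javascript
// Headers that must never follow a request to a different host.
const CONFIDENTIAL_HEADERS = ['authorization', 'cookie'];

// Return the headers that may safely be sent to the redirect target:
// keep everything for a same-host redirect, otherwise drop the
// confidential ones before re-issuing the request.
function headersForRedirect(headers, fromUrl, toUrl) {
  const sameHost = new URL(fromUrl).host === new URL(toUrl).host;
  if (sameHost) return { ...headers };
  const safe = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!CONFIDENTIAL_HEADERS.includes(name.toLowerCase())) {
      safe[name] = value;
    }
  }
  return safe;
}
```

The host comparison is the essential step the vulnerable versions skipped before copying the original request headers onto the redirected request.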
- Summary:
Red Hat Advanced Cluster Management for Kubernetes 2.3.6 General Availability release images, which provide security updates and bug fixes. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security updates:
- Nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)
- Nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- Follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor (CVE-2022-0155)
Bug fixes:
- Inform ACM policy is not checking properly the node fields (BZ# 2015588)
- ImagePullPolicy is "Always" for multicluster-operators-subscription-rhel8 image (BZ# 2021128)
- Traceback blocks reconciliation of helm repository hosted on AWS S3 storage (BZ# 2021576)
- RHACM 2.3.6 images (BZ# 2029507)
- Console UI enabled SNO UI Options not displayed during cluster creating (BZ# 2030002)
- Grc pod restarts for each new GET request to the Governance Policy Page (BZ# 2037351)
- Clustersets do not appear in UI (BZ# 2049810)
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
2015588 - Inform ACM policy is not checking properly the node fields
2021128 - imagePullPolicy is "Always" for multicluster-operators-subscription-rhel8 image
2021576 - traceback blocks reconciliation of helm repository hosted on AWS S3 storage
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2029507 - RHACM 2.3.6 images
2030002 - Console UI enabled SNO UI Options not displayed during cluster creating
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2037351 - grc pod restarts for each new GET request to the Governance Policy Page
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2049810 - Clustersets do not appear in UI
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update Advisory ID: RHSA-2022:8502-01 Product: Red Hat Virtualization Advisory URL: https://access.redhat.com/errata/RHSA-2022:8502 Issue date: 2022-11-16 CVE Names: CVE-2022-0155 CVE-2022-2805 ==================================================================== 1. Summary:
Updated ovirt-engine packages that fix several bugs and add various enhancements are now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch
- Description:
The ovirt-engine package provides the Red Hat Virtualization Manager, a centralized management platform that allows system administrators to view and manage virtual machines. The Manager provides a comprehensive range of features including search capabilities, resource management, live migrations, and virtual infrastructure provisioning.
Bug Fix(es):
- Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments. (BZ#1705338)
- RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout. (BZ#1836318)
- [DR] Failover / Failback HA VM Fails to be started due to 'VM XXX is being imported' (BZ#1968433)
- Virtual Machine with lease fails to run on DR failover (BZ#1974535)
- Disk is missing after importing VM from Storage Domain that was detached from another DC. (BZ#1983567)
- Unable to switch RHV host into maintenance mode as there are image transfer in progress (BZ#2123141)
- not able to import disk in 4.5.2 (BZ#2134549)
Enhancement(s):
- [RFE] Show last events for user VMs (BZ#1886211)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1705338 - Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments.
1836318 - RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout.
1886211 - [RFE] Show last events for user VMs
1968433 - [DR] Failover / Failback HA VM Fails to be started due to 'VM XXX is being imported'
1974535 - Virtual Machine with lease fails to run on DR failover
1983567 - Disk is missing after importing VM from Storage Domain that was detached from another DC.
2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor
2079545 - CVE-2022-2805 ovirt-engine: RHVM admin password is logged unfiltered when using otopi-style
2118672 - Use rpm instead of auto in package_facts ansible module to prevent mistakes of determining the correct package manager inside package_facts module
2123141 - Unable to switch RHV host into maintenance mode as there are image transfer in progress
2127836 - Create template dialog is not closed when clicking in OK and the template is not created
2134549 - not able to import disk in 4.5.2
2137207 - The RemoveDisk job finishes before the disk was removed from the DB
- Package List:
RHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:
Source: ovirt-engine-4.5.3.2-1.el8ev.src.rpm ovirt-engine-dwh-4.5.7-1.el8ev.src.rpm ovirt-engine-ui-extensions-1.3.6-1.el8ev.src.rpm ovirt-web-ui-1.9.2-1.el8ev.src.rpm
noarch: ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-backend-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-dbscripts-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-dwh-4.5.7-1.el8ev.noarch.rpm ovirt-engine-dwh-grafana-integration-setup-4.5.7-1.el8ev.noarch.rpm ovirt-engine-dwh-setup-4.5.7-1.el8ev.noarch.rpm ovirt-engine-health-check-bundler-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-restapi-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-base-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-cinderlib-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-imageio-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-ovirt-engine-common-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-setup-plugin-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-tools-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-tools-backup-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-ui-extensions-1.3.6-1.el8ev.noarch.rpm ovirt-engine-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-webadmin-portal-4.5.3.2-1.el8ev.noarch.rpm ovirt-engine-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm ovirt-web-ui-1.9.2-1.el8ev.noarch.rpm python3-ovirt-engine-lib-4.5.3.2-1.el8ev.noarch.rpm rhvm-4.5.3.2-1.el8ev.noarch.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-0155 https://access.redhat.com/security/cve/CVE-2022-2805 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBY3UyLtzjgjWX9erEAQjacQ//emo9BwMrctxmlrqBwa5vAlrr2Kt3ZVCY hAHTbaUk+sXw9JxGeCZ/aD8/c6ij5oCprdMs4sOGmOfTHEkmj+GbPWfdEluoJvr0 PM001KBuucWC6YDaW/R3V20oZrqdRAlPX7yvTzxuNNlpnpmGx/UkAwB2GSechs91 kXp+E74e1RgOgbFRtzZcgfwCb0Df2Swi2vXdnPDfri5fRVztgwcrIcljLoTBkMy7 8M719eYwsuu1987MqSnIvBOHEj2oWN2IQJTaeNPoz3MqgvYKwqEdiozchJpWvXqi WddEaLT8S+1WhDf4VCIkdtIZrww/Ya2BxoFoEroCr7jTSDy9c9aFcnjn4wqnhO9s yqKfxpTWz9mpgTdHHT4FC06L9AUsxa/UaLKydO3tZhc+IjPH0O63SDBi/pZ5WVAH oCmYtRJA2OYlQABpHXR2x7Pj2Jv7JRNWHjGnabxWVoY6E09vdIrPliz0taPI59s7 YvNtXhkWPIa3w5kyibIxTVLqjR4gr2zrpPa2Oc6QGvEP9zyu59bAxoXKSQj0SYM8 BFykrVd3ahlPGFqOl6UBdvPJpXpJtNXK3lJBCGu2glFSwPXX26ij2fLUW3b7DnUC +xMPlL9m45KHx/Y7s4WnDvlvSNRjhy/Ttddgm/JwYOLxlzTWd1Qez/vfyDuIK7rk QvQket8bo7Q=xS+k -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0429", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "follow-redirects", "scope": "lt", "trust": 1.0, "vendor": "follow 
redirects", "version": "1.14.7" }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "follow-redirects", "scope": null, "trust": 0.8, "vendor": "follow redirects", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "NVD", "id": "CVE-2022-0155" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:follow-redirects_project:follow-redirects:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "1.14.7", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-0155" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "166204" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "169919" } ], "trust": 0.7 }, "cve": "CVE-2022-0155", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" 
}, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 4.3, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2022-0155", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": 
"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "security@huntr.dev", "availabilityImpact": "HIGH", "baseScore": 8.0, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.1, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:R/S:U/C:H/I:H/A:H", "version": "3.0" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.5, "baseSeverity": "Medium", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-0155", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:N/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-0155", "trust": 1.8, "value": "MEDIUM" }, { "author": "security@huntr.dev", "id": "CVE-2022-0155", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202201-685", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2022-0155", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2032128 - Observability - dashboard name contains `/` would cause error when generating dashboard cm\n2033051 - ACM application placement fails after renaming the application name\n2039197 - disable the obs metric collect should not impact the managed cluster upgrade\n2039820 - Observability - cluster list should only contain OCP311 cluster on OCP311 dashboard\n2042223 - the value of name label changed from clusterclaim name to cluster name\n2043535 - CVE-2022-0144 nodejs-shelljs: improper privilege management\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048500 - VMWare Cluster creation does not accept ecdsa-sha2-nistp521 ssh keys\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2053211 - clusterSelector matchLabels spec are cleared when changing app name/namespace during creating an app in UI\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053279 - Application cluster status is not updated in UI after restoring\n2056610 - OpenStack cluster creation is using deprecated floating IP config for 4.7+\n2057249 - RHACM 2.4.3 images\n2059039 - The value of Vendor reported by cluster metrics was Other even if the vendor label in managedcluster was Openshift\n2059954 - Subscriptions stop reconciling after channel secrets are recreated\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2064702 - 
CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2074156 - Placementrule is not reconciling on a new fresh environment\n2074543 - The cluster claimed from clusterpool can not auto imported\n\n5. Summary:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.6 General\nAvailability\nrelease images, which provide security updates and bug fixes. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/\n\nSecurity updates:\n\n* Nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* Nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* Golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* Follow-redirects: Exposure of Private Personal Information to an\nUnauthorized Actor (CVE-2022-0155)\n\nBug fixes:\n\n* Inform ACM policy is not checking properly the node fields (BZ# 2015588)\n\n* ImagePullPolicy is \"Always\" for multicluster-operators-subscription-rhel8\nimage (BZ# 2021128)\n\n* Traceback blocks reconciliation of helm repository hosted on AWS S3\nstorage (BZ# 2021576)\n\n* RHACM 2.3.6 images (BZ# 2029507)\n\n* Console UI enabled SNO UI Options not displayed during cluster creating\n(BZ# 2030002)\n\n* Grc pod restarts for each new GET request to the Governance Policy Page\n(BZ# 2037351)\n\n* Clustersets do not appear in UI (BZ# 2049810)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2015588 - Inform ACM policy is not checking properly the node fields\n2021128 - imagePullPolicy is \"Always\" for multicluster-operators-subscription-rhel8 image\n2021576 - traceback blocks reconciliation of helm repository hosted on AWS S3 storage\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2029507 - RHACM 2.3.6 images\n2030002 - Console UI enabled SNO UI Options not displayed during cluster creating\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2037351 - grc pod restarts for each new GET request to the Governance Policy Page\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2049810 - Clustersets do not appear in UI\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update\nAdvisory ID: RHSA-2022:8502-01\nProduct: Red Hat Virtualization\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8502\nIssue date: 2022-11-16\nCVE Names: CVE-2022-0155 CVE-2022-2805\n====================================================================\n1. Summary:\n\nUpdated ovirt-engine packages that fix several bugs and add various\nenhancements are now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4 - noarch\n\n3. 
Description:\n\nThe ovirt-engine package provides the Red Hat Virtualization Manager, a\ncentralized management platform that allows system administrators to view\nand manage virtual machines. The Manager provides a comprehensive range of\nfeatures including search capabilities, resource management, live\nmigrations, and virtual infrastructure provisioning. \n\nBug Fix(es):\n\n* Ghost OVFs are written when using floating SD to migrate VMs between 2\nRHV environments. (BZ#1705338)\n\n* RHV engine is reporting a delete disk with wipe as completing\nsuccessfully when it actually fails from a timeout. (BZ#1836318)\n\n* [DR] Failover / Failback HA VM Fails to be started due to \u0027VM XXX is\nbeing imported\u0027 (BZ#1968433)\n\n* Virtual Machine with lease fails to run on DR failover (BZ#1974535)\n\n* Disk is missing after importing VM from Storage Domain that was detached\nfrom another DC. (BZ#1983567)\n\n* Unable to switch RHV host into maintenance mode as there are image\ntransfer in progress (BZ#2123141)\n\n* not able to import disk in 4.5.2 (BZ#2134549)\n\nEnhancement(s):\n\n* [RFE] Show last events for user VMs (BZ#1886211)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/2974891\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1705338 - Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments. \n1836318 - RHV engine is reporting a delete disk with wipe as completing successfully when it actually fails from a timeout. \n1886211 - [RFE] Show last events for user VMs\n1968433 - [DR] Failover / Failback HA VM Fails to be started due to \u0027VM XXX is being imported\u0027\n1974535 - Virtual Machine with lease fails to run on DR failover\n1983567 - Disk is missing after importing VM from Storage Domain that was detached from another DC. 
\n2044556 - CVE-2022-0155 follow-redirects: Exposure of Private Personal Information to an Unauthorized Actor\n2079545 - CVE-2022-2805 ovirt-engine: RHVM admin password is logged unfiltered when using otopi-style\n2118672 - Use rpm instead of auto in package_facts ansible module to prevent mistakes of determining the correct package manager inside package_facts module\n2123141 - Unable to switch RHV host into maintenance mode as there are image transfer in progress\n2127836 - Create template dialog is not closed when clicking in OK and the template is not created\n2134549 - not able to import disk in 4.5.2\n2137207 - The RemoveDisk job finishes before the disk was removed from the DB\n\n6. Package List:\n\nRHEL-8-RHEV-S-4.4 - Red Hat Virtualization Engine 4.4:\n\nSource:\novirt-engine-4.5.3.2-1.el8ev.src.rpm\novirt-engine-dwh-4.5.7-1.el8ev.src.rpm\novirt-engine-ui-extensions-1.3.6-1.el8ev.src.rpm\novirt-web-ui-1.9.2-1.el8ev.src.rpm\n\nnoarch:\novirt-engine-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-backend-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-dbscripts-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-dwh-4.5.7-1.el8ev.noarch.rpm\novirt-engine-dwh-grafana-integration-setup-4.5.7-1.el8ev.noarch.rpm\novirt-engine-dwh-setup-4.5.7-1.el8ev.noarch.rpm\novirt-engine-health-check-bundler-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-restapi-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-base-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-cinderlib-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-imageio-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-ovirt-engine-common-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-setup-plugin-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-tools-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-tools-backup-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-ui
-extensions-1.3.6-1.el8ev.noarch.rpm\novirt-engine-vmconsole-proxy-helper-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-webadmin-portal-4.5.3.2-1.el8ev.noarch.rpm\novirt-engine-websocket-proxy-4.5.3.2-1.el8ev.noarch.rpm\novirt-web-ui-1.9.2-1.el8ev.noarch.rpm\npython3-ovirt-engine-lib-4.5.3.2-1.el8ev.noarch.rpm\nrhvm-4.5.3.2-1.el8ev.noarch.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-0155\nhttps://access.redhat.com/security/cve/CVE-2022-2805\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY3UyLtzjgjWX9erEAQjacQ//emo9BwMrctxmlrqBwa5vAlrr2Kt3ZVCY\nhAHTbaUk+sXw9JxGeCZ/aD8/c6ij5oCprdMs4sOGmOfTHEkmj+GbPWfdEluoJvr0\nPM001KBuucWC6YDaW/R3V20oZrqdRAlPX7yvTzxuNNlpnpmGx/UkAwB2GSechs91\nkXp+E74e1RgOgbFRtzZcgfwCb0Df2Swi2vXdnPDfri5fRVztgwcrIcljLoTBkMy7\n8M719eYwsuu1987MqSnIvBOHEj2oWN2IQJTaeNPoz3MqgvYKwqEdiozchJpWvXqi\nWddEaLT8S+1WhDf4VCIkdtIZrww/Ya2BxoFoEroCr7jTSDy9c9aFcnjn4wqnhO9s\nyqKfxpTWz9mpgTdHHT4FC06L9AUsxa/UaLKydO3tZhc+IjPH0O63SDBi/pZ5WVAH\noCmYtRJA2OYlQABpHXR2x7Pj2Jv7JRNWHjGnabxWVoY6E09vdIrPliz0taPI59s7\nYvNtXhkWPIa3w5kyibIxTVLqjR4gr2zrpPa2Oc6QGvEP9zyu59bAxoXKSQj0SYM8\nBFykrVd3ahlPGFqOl6UBdvPJpXpJtNXK3lJBCGu2glFSwPXX26ij2fLUW3b7DnUC\n+xMPlL9m45KHx/Y7s4WnDvlvSNRjhy/Ttddgm/JwYOLxlzTWd1Qez/vfyDuIK7rk\nQvQket8bo7Q=xS+k\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": 
"PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "166204" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "169919" } ], "trust": 2.34 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-0155", "trust": 4.0 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-003215", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166812", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166516", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166204", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166946", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "166970", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169919", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5020", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1071", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5790", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5990", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3482", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071510", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022032840", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202201-685", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-0155", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166309", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": 
"166516" }, { "db": "PACKETSTORM", "id": "166204" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "169919" }, { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "id": "VAR-202201-0429", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:35:15.196000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Drop\u00a0Cookie\u00a0header\u00a0across\u00a0domains. Siemens Siemens\u00a0Security\u00a0Advisory", "trust": 0.8, "url": "https://github.com/follow-redirects/follow-redirects/commit/8b347cbcef7c7b72a6e9be20f5710c17d6163c22" }, { "title": "Follow Redirects Security vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=178984" }, { "title": "Red Hat: Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228502 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221715 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221681 - security advisory" }, { "title": "Red Hat: Important: Red Hat Advanced 
Cluster Management 2.3.6 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220595 - security advisory" }, { "title": "IBM: Security Bulletin: IBM Security QRadar Analyst Workflow app for IBM QRadar SIEM is vulnerable to using components with known vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=e84bc00c9f55b86e956036a09317820b" }, { "title": "IBM: Security Bulletin: IBM Security QRadar Analyst Workflow app for IBM QRadar SIEM is vulnerable to using components with known vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=2f42526bdbba457e2271ed17ea2e3e9a" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221083 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221476 - security advisory" }, { "title": "IBM: Security Bulletin: IBM QRadar Assistant app for IBM QRadar SIEM includes components with multiple known vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0c5e20c044e4005143b2303b28407553" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220856 - security advisory" }, { "title": "IBM: Security Bulletin: Netcool Operations Insight v1.6.6 contains fixes for multiple security vulnerabilities.", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=68c6989b84f14aaac220c13b754c7702" 
}, { "title": "ioBroker.switchbot-ble", "trust": 0.1, "url": "https://github.com/mrbungle64/iobroker.switchbot-ble " }, { "title": "node-red-contrib-ecovacs-deebot", "trust": 0.1, "url": "https://github.com/mrbungle64/node-red-contrib-ecovacs-deebot " }, { "title": "ioBroker.ecovacs-deebot", "trust": 0.1, "url": "https://github.com/mrbungle64/iobroker.ecovacs-deebot " }, { "title": "ecovacs-deebot.js", "trust": 0.1, "url": "https://github.com/mrbungle64/ecovacs-deebot.js " }, { "title": "ioBroker.e3dc-rscp", "trust": 0.1, "url": "https://github.com/git-kick/iobroker.e3dc-rscp " } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-359", "trust": 1.0 }, { "problemtype": "Disclosure of Personal Information to Unauthorized Actors (CWE-359) [ others ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "NVD", "id": "CVE-2022-0155" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155" }, { "trust": 1.7, "url": "https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406" }, { "trust": 1.7, "url": "https://github.com/follow-redirects/follow-redirects/commit/8b347cbcef7c7b72a6e9be20f5710c17d6163c22" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99475301/index.html" }, { "trust": 0.8, "url": 
"https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.6, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071510" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166970/red-hat-security-advisory-2022-1715-01.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/node-js-follow-redirects-information-disclosure-via-cookie-header-38829" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/ibm-security-qradar-siem-information-disclosure-39657" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1071" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169919/red-hat-security-advisory-2022-8502-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166812/red-hat-security-advisory-2022-1476-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166516/red-hat-security-advisory-2022-1083-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5020" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5790" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3482" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5990" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022032840" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/166946/red-hat-security-advisory-2022-1681-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166204/red-hat-security-advisory-2022-0595-02.html" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.4, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565" }, { "trust": 0.3, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.3, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0185" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4122" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4155" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-4019" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4192" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3984" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4193" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3872" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0413" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25236" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22827" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0392" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22824" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3999" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23308" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0516" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/" }, { "trust": 0.2, "url": 
"https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-45960" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-46143" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0361" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0261" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22826" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22825" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0359" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0435" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4154" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2022-22822" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0144" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0318" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-22823" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25315" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23218" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0847" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25235" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1154" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24723" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24785" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25636" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25636" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4028" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4115" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2022-24723" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4115" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4028" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0613" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0613" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/359.html" }, { "trust": 0.1, "url": "https://github.com/mrbungle64/iobroker.switchbot-ble" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-qradar-analyst-workflow-app-for-ibm-qradar-siem-is-vulnerable-to-using-components-with-known-vulnerabilities-2/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23434" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25710" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0856" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25214" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0465" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25709" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html-single/install/index#installing" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25214" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39241" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.2/html/release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1476" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24778" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22825" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1083" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4034" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20321" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42739" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25704" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3872" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-20612" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42739" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3984" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25704" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42574" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0185" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4122" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36322" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-20612" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-20617" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20321" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4019" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-20617" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36322" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1365" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1365" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23555" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23555" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0711" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:1715" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2805" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8502" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2805" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "166204" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "169919" }, { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-0155" }, { "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "db": "PACKETSTORM", "id": "166309" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "166516" }, { "db": "PACKETSTORM", "id": "166204" }, { "db": "PACKETSTORM", "id": "166946" }, { "db": "PACKETSTORM", "id": "166970" }, { "db": "PACKETSTORM", "id": "169919" }, { "db": "NVD", "id": "CVE-2022-0155" }, { "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-01-10T00:00:00", "db": "VULMON", "id": "CVE-2022-0155" }, { "date": "2023-02-10T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "date": "2022-03-15T15:44:21", "db": "PACKETSTORM", "id": "166309" }, { "date": "2022-04-21T15:12:25", "db": "PACKETSTORM", "id": "166812" }, { "date": "2022-03-29T15:53:19", "db": "PACKETSTORM", 
"id": "166516" }, { "date": "2022-03-04T16:17:56", "db": "PACKETSTORM", "id": "166204" }, { "date": "2022-05-04T05:42:06", "db": "PACKETSTORM", "id": "166946" }, { "date": "2022-05-05T17:33:41", "db": "PACKETSTORM", "id": "166970" }, { "date": "2022-11-17T13:22:54", "db": "PACKETSTORM", "id": "169919" }, { "date": "2022-01-10T20:15:08.177000", "db": "NVD", "id": "CVE-2022-0155" }, { "date": "2022-01-10T00:00:00", "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-10-28T00:00:00", "db": "VULMON", "id": "CVE-2022-0155" }, { "date": "2023-02-10T07:20:00", "db": "JVNDB", "id": "JVNDB-2022-003215" }, { "date": "2022-10-28T17:54:29.403000", "db": "NVD", "id": "CVE-2022-0155" }, { "date": "2022-11-18T00:00:00", "db": "CNNVD", "id": "CNNVD-202201-685" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-685" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "follow-redirects\u00a0 Personal Information Disclosure Vulnerability to Unauthorized Actors in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003215" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-685" } ], "trust": 0.6 } }
var-202301-0546
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device's file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a path traversal vulnerability. Information may be obtained, information may be tampered with, and service operation may be interrupted (DoS).
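The flaw described above is a classic path traversal (CWE-22). As a minimal sketch of the missing check, using a hypothetical `resolve_under_root` helper (the name and API are invented for illustration, not taken from SINEC INS), a file endpoint should canonicalize the requested path and verify it remains inside a fixed root before reading or writing:

```python
import os

def resolve_under_root(root: str, user_path: str) -> str:
    """Join an untrusted path to a fixed root directory and reject
    any result that escapes the root (CWE-22 path traversal).

    Canonicalizing with realpath() collapses '..' segments and
    resolves symlinks before the containment check is made.
    """
    root_real = os.path.realpath(root)
    candidate = os.path.realpath(os.path.join(root_real, user_path))
    # The canonical candidate must still be located under the root.
    if os.path.commonpath([candidate, root_real]) != root_real:
        raise ValueError(f"path escapes root: {user_path!r}")
    return candidate
```

With such a check in place, a request for `../../etc/shadow` is rejected instead of being resolved against the device's file system.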
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0546", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": "1.0 sp2 update 1" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-45093" } ] }, "cve": "CVE-2022-45093", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": 
"HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 8.5, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.8, "impactScore": 6.0, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "CHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 8.8, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-45093", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "Low", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-45093", "trust": 1.8, "value": "HIGH" }, { "author": "productcert@siemens.com", "id": "CVE-2022-45093", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202301-799", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). 
An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS Exists in a past traversal vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "JVNDB", "id": "JVNDB-2023-001807" } ], "trust": 1.62 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-45093", "trust": 3.2 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.6 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-001807", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202301-799", "trust": 0.6 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "id": "VAR-202301-0546", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:15:36.233000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "SSA-332410", 
"trust": 0.8, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "title": "Siemens SINEC NMS Repair measures for path traversal vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=221681" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-22", "trust": 1.0 }, { "problemtype": "Path traversal (CWE-22) [ others ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/index.html" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45093" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-45093/" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "db": "NVD", "id": "CVE-2022-45093" }, { "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { 
"date": "2023-05-16T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "date": "2023-01-10T12:15:23.523000", "db": "NVD", "id": "CVE-2022-45093" }, { "date": "2023-01-10T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-05-16T03:25:00", "db": "JVNDB", "id": "JVNDB-2023-001807" }, { "date": "2023-01-14T00:43:41.810000", "db": "NVD", "id": "CVE-2022-45093" }, { "date": "2023-01-16T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-799" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-799" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "SINEC\u00a0INS\u00a0 Past traversal vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001807" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "path traversal", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-799" } ], "trust": 0.6 } }
var-202210-0043
Vulnerability from variot
The llhttp parser in the http module in Node v18.7.0 does not correctly handle header fields that are not terminated with CRLF. This may result in HTTP Request Smuggling. Node.js from the Node.js Foundation, as well as products from several other vendors, contain a vulnerability related to HTTP request smuggling. Information may be obtained and information may be tampered with.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
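Smuggling of this kind arises when a lenient parser accepts header lines that are not terminated by CRLF while an upstream proxy parses the same bytes strictly, so the two hops disagree about where one request ends and the next begins. A minimal sketch of the strict behaviour (the `split_header_lines` helper is hypothetical, not llhttp's actual API):

```python
def split_header_lines(raw: bytes) -> list[bytes]:
    """Split an HTTP/1.x header block, insisting on CRLF termination.

    Accepting a bare LF (or CR) as a line terminator, as lenient
    parsers do, lets an attacker craft bytes that one hop reads as a
    single request and another hop reads as two -- request smuggling.
    """
    head, sep, _body = raw.partition(b"\r\n\r\n")
    if not sep:
        raise ValueError("header block not terminated by CRLF CRLF")
    lines = head.split(b"\r\n")
    for line in lines:
        # After splitting on CRLF, no stray CR or LF may remain.
        if b"\r" in line or b"\n" in line:
            raise ValueError("bare CR/LF inside a header line")
    return lines
```

A request whose header lines end in a bare `\n` is rejected outright rather than being reinterpreted differently downstream.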
====================================================================
Red Hat Security Advisory
Synopsis:          Important: nodejs:16 security update
Advisory ID:       RHSA-2022:6964-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6964
Issue date:        2022-10-17
CVE Names:         CVE-2022-35255 CVE-2022-35256
====================================================================
1. Summary:
An update for the nodejs:16 module is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
3. Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: nodejs 16.
Security Fix(es):
- nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)

- nodejs: HTTP Request Smuggling due to incorrect parsing of header fields (CVE-2022-35256)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
5. Bugs fixed (https://bugzilla.redhat.com/):
2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen
2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields
6. Package List:
Red Hat Enterprise Linux AppStream (v. 8):
Source:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm
aarch64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm
noarch:
nodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm
ppc64le:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm
s390x:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm
x86_64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2022-35255
https://access.redhat.com/security/cve/CVE-2022-35256
https://access.redhat.com/security/updates/classification/#important
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E vrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+ BqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc VsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg bcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo 3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs BF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm a4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4 8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT l2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP WBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV jippDtM0Pgw=vFcS -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Bugs fixed (https://bugzilla.redhat.com/):
2066009 - CVE-2021-44906 minimist: prototype pollution
2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address
2142823 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]
2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process
2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service
2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability
2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check
2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS
2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule
2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable
2175828 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]
- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Debian Security Advisory DSA-5326-1                    security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                       https://www.debian.org/security/faq
Package        : nodejs
CVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215
                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548
Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.
We recommend that you upgrade your nodejs packages.
For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . ========================================================================== Ubuntu Security Notice USN-6491-1 November 21, 2023
nodejs vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 22.04 LTS
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS (Available with Ubuntu Pro)
Summary:
Several security issues were fixed in Node.js.
Software Description:
- nodejs: An open-source, cross-platform JavaScript runtime environment.
Details:
Axel Chong discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. (CVE-2022-32212)
Zeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213, CVE-2022-32214, CVE-2022-32215)
It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)
It was discovered that Node.js incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 22.04 LTS:
  libnode-dev      12.22.9~dfsg-1ubuntu3.2
  libnode72        12.22.9~dfsg-1ubuntu3.2
  nodejs           12.22.9~dfsg-1ubuntu3.2
  nodejs-doc       12.22.9~dfsg-1ubuntu3.2
Ubuntu 20.04 LTS:
  libnode-dev      10.19.0~dfsg-3ubuntu1.3
  libnode64        10.19.0~dfsg-3ubuntu1.3
  nodejs           10.19.0~dfsg-3ubuntu1.3
  nodejs-doc       10.19.0~dfsg-3ubuntu1.3
Ubuntu 18.04 LTS (Available with Ubuntu Pro):
  nodejs           8.10.0~dfsg-2ubuntu0.4+esm4
  nodejs-dev       8.10.0~dfsg-2ubuntu0.4+esm4
  nodejs-doc       8.10.0~dfsg-2ubuntu0.4+esm4
In general, a standard system update will make all the necessary changes.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202405-29
https://security.gentoo.org/
Severity: Low
Title: Node.js: Multiple Vulnerabilities
Date: May 08, 2024
Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Affected packages
Package           Vulnerable    Unaffected
----------------  ------------  ------------
net-libs/nodejs   < 16.20.2     >= 16.20.2
Description
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
  # emerge --sync
  # emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774   https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672   https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883  https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884  https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918  https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930  https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931  https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939  https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940  https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778  https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602  https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786  https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256 https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
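The llhttp flaw tracked as CVE-2022-35256 in the data below stems from header fields that are not terminated with CRLF: when a front-end proxy and a back-end server disagree on whether a bare LF ends a header line, the same byte stream can be carved into different header (and ultimately request) boundaries, which is the classic request-smuggling primitive. A minimal sketch of that parser disagreement, using two deliberately simplified toy parsers (these are illustrative only, not the llhttp code):

```python
# Toy illustration of the parser disagreement behind header-smuggling bugs
# such as CVE-2022-35256. Both parsers below are hypothetical sketches.

def split_headers_strict(raw: bytes) -> list[bytes]:
    """Treat only CRLF as a header-line terminator (per RFC 9112)."""
    return raw.split(b"\r\n")

def split_headers_lenient(raw: bytes) -> list[bytes]:
    """Also accept a bare LF as a line terminator (lenient parser)."""
    return raw.replace(b"\r\n", b"\n").split(b"\n")

# A header block in which one line ends with a bare LF instead of CRLF.
raw = b"Host: example.com\nX-Smuggle: GET /admin HTTP/1.1\r\nContent-Length: 0\r\n"

strict = split_headers_strict(raw)    # 3 pieces: the LF line stays glued
lenient = split_headers_lenient(raw)  # 4 pieces: the LF line stands alone

# The strict parser sees "X-Smuggle: ..." fused onto the Host header value;
# the lenient parser sees it as its own header line. Two hops that disagree
# this way can be driven to disagree on where one request ends and the
# next begins.
print(len(strict), len(lenient))
```

The fix in patched llhttp versions is to reject such malformed terminators outright, so both hops fail closed instead of silently diverging.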
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202210-0043", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", 
"version": "16.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "llhttp", "scope": "lt", "trust": 1.0, "vendor": "llhttp", "version": "6.0.10" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "16.12.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.20.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.13.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "16.17.1" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.9.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null }, { "model": "llhttp", "scope": null, "trust": 0.8, "vendor": "llhttp", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "NVD", "id": "CVE-2022-35256" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", 
"versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "16.12.0", "versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.20.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "16.17.1", "versionStartIncluding": "16.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "18.9.1", "versionStartIncluding": "18.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:llhttp:llhttp:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "6.0.10", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-35256" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "171666" }, { "db": 
"PACKETSTORM", "id": "169781" }, { "db": "PACKETSTORM", "id": "169779" } ], "trust": 0.5 }, "cve": "CVE-2022-35256", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.5, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 3.9, "impactScore": 2.5, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.5, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2022-35256", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-35256", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202210-1266", "trust": 0.6, "value": "MEDIUM" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "db": 
"NVD", "id": "CVE-2022-35256" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The llhttp parser in the http module in Node v18.7.0 does not correctly handle header fields that are not terminated with CLRF. This may result in HTTP Request Smuggling. Node.js Foundation of Node.js For products from other vendors, HTTP There is a vulnerability related to request smuggling.Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: nodejs:16 security update\nAdvisory ID: RHSA-2022:6964-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6964\nIssue date: 2022-10-17\nCVE Names: CVE-2022-35255 CVE-2022-35256\n====================================================================\n1. Summary:\n\nAn update for the nodejs:16 module is now available for Red Hat Enterprise\nLinux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nnodejs 16. 
\n\nSecurity Fix(es):\n\n* nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)\n\n* nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n(CVE-2022-35256)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm\n\naarch64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm\n\nnoarch:\nnodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm\n\nppc64le:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnp
m-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm\n\ns390x:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm\n\nx86_64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-35255\nhttps://access.redhat.com/security/cve/CVE-2022-35256\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E\nvrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+\nBqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc\nVsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg\nbcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo\n3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs\nBF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm\na4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4\n8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT\nl2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP\nWBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV\njippDtM0Pgw=vFcS\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Bugs fixed (https://bugzilla.redhat.com/):\n\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address\n2142823 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability\n2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check\n2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS\n2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule\n2172217 - 
CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable\n2175828 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.4.0.z]\n\n6. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 24, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nodejs\nCVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. ==========================================================================\nUbuntu Security Notice USN-6491-1\nNovember 21, 2023\n\nnodejs vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 22.04 LTS\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nSeveral security issues were fixed in Node.js. \n\nSoftware Description:\n- nodejs: An open-source, cross-platform JavaScript runtime environment. \n\nDetails:\n\nAxel Chong discovered that Node.js incorrectly handled certain inputs. 
If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. (CVE-2022-32212)\n\nZeyu Zhang discovered that Node.js incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to execute\narbitrary code. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-32213,\nCVE-2022-32214, CVE-2022-32215)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-35256)\n\nIt was discovered that Node.js incorrectly handled certain inputs. If a user\nor an automated system were tricked into opening a specially crafted input\nfile, a remote attacker could possibly use this issue to execute arbitrary\ncode. This issue only affected Ubuntu 22.04 LTS. (CVE-2022-43548)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 22.04 LTS:\n libnode-dev 12.22.9~dfsg-1ubuntu3.2\n libnode72 12.22.9~dfsg-1ubuntu3.2\n nodejs 12.22.9~dfsg-1ubuntu3.2\n nodejs-doc 12.22.9~dfsg-1ubuntu3.2\n\nUbuntu 20.04 LTS:\n libnode-dev 10.19.0~dfsg-3ubuntu1.3\n libnode64 10.19.0~dfsg-3ubuntu1.3\n nodejs 10.19.0~dfsg-3ubuntu1.3\n nodejs-doc 10.19.0~dfsg-3ubuntu1.3\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n nodejs 8.10.0~dfsg-2ubuntu0.4+esm4\n nodejs-dev 8.10.0~dfsg-2ubuntu0.4+esm4\n nodejs-doc 8.10.0~dfsg-2ubuntu0.4+esm4\n\nIn general, a standard system update will make all the necessary changes. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage Vulnerable Unaffected\n--------------- ------------ ------------\nnet-libs/nodejs \u003c 16.20.2 \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-35256" }, { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "VULMON", "id": "CVE-2022-35256" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "171666" }, { "db": "PACKETSTORM", "id": "169781" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-35256", "trust": 4.1 }, { "db": "HACKERONE", "id": "1675191", "trust": 2.4 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 2.4 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-022575", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "169781", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170727", "trust": 0.7 }, { "db": "PACKETSTORM", "id": 
"169408", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "169437", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.6632", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.1926", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5146", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202210-1266", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-35256", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168757", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171839", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171666", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169779", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "175817", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35256" }, { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "171666" }, { "db": "PACKETSTORM", "id": "169781" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "db": "NVD", "id": "CVE-2022-35256" } ] }, "id": "VAR-202210-0043", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:44:46.557000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Node.js Remediation measures for environmental problem vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=219729" }, { "title": "Red Hat: ", 
"trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-35256" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35256" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-444", "trust": 1.0 }, { "problemtype": "HTTP Request Smuggling (CWE-444) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "NVD", "id": "CVE-2022-35256" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 2.4, "url": "https://hackerone.com/reports/1675191" }, { "trust": 2.4, "url": "https://www.debian.org/security/2023/dsa-5326" }, { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-35256" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169408/red-hat-security-advisory-2022-6963-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.1926" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169781/red-hat-security-advisory-2022-7830-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5146" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/169437/red-hat-security-advisory-2022-7044-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.6632" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-35256/" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-35255" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-23918" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-35065" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-43548" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-24999" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24999" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38900" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4904" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44533" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2023-23920" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44532" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25881" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44531" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21824" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-4904" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25881" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-38900" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6964" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1742" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-23920" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7830" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:7821" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nodejs" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/nodejs/12.22.9~dfsg-1ubuntu3.2" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-6491-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/nodejs/10.19.0~dfsg-3ubuntu1.3" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35256" }, { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "171666" }, { "db": "PACKETSTORM", "id": "169781" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", 
"id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "db": "NVD", "id": "CVE-2022-35256" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-35256" }, { "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "171666" }, { "db": "PACKETSTORM", "id": "169781" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "175817" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "db": "NVD", "id": "CVE-2022-35256" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-17T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "date": "2022-10-18T14:27:29", "db": "PACKETSTORM", "id": "168757" }, { "date": "2023-04-12T16:57:08", "db": "PACKETSTORM", "id": "171839" }, { "date": "2023-04-03T17:32:27", "db": "PACKETSTORM", "id": "171666" }, { "date": "2022-11-08T13:50:47", "db": "PACKETSTORM", "id": "169781" }, { "date": "2022-11-08T13:50:31", "db": "PACKETSTORM", "id": "169779" }, { "date": "2023-01-25T16:09:12", "db": "PACKETSTORM", "id": "170727" }, { "date": "2023-11-21T16:00:44", "db": "PACKETSTORM", "id": "175817" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-10-18T00:00:00", "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "date": "2022-12-05T22:15:10.570000", "db": "NVD", "id": "CVE-2022-35256" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-17T08:21:00", "db": "JVNDB", "id": "JVNDB-2022-022575" }, { "date": 
"2023-04-04T00:00:00", "db": "CNNVD", "id": "CNNVD-202210-1266" }, { "date": "2023-05-12T13:30:33.190000", "db": "NVD", "id": "CVE-2022-35256" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "175817" }, { "db": "CNNVD", "id": "CNNVD-202210-1266" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Node.js\u00a0Foundation\u00a0 of \u00a0Node.js\u00a0 in products from other multiple vendors \u00a0HTTP\u00a0 Request Smuggling Vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022575" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "environmental issue", "sources": [ { "db": "CNNVD", "id": "CNNVD-202210-1266" } ], "trust": 0.6 } }
var-202005-0397
Vulnerability from variot
json-c through 0.14 has an integer overflow and out-of-bounds write via a large JSON file, as demonstrated by printbuf_memappend. Summary:
An update is now available for OpenShift Logging 5.1. Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
For Red Hat OpenShift Logging 5.1, see the following instructions to apply this update:
https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html
- Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- Solution:
OSP 16.2.z Release - OSP Director Operator Containers
- Bugs fixed (https://bugzilla.redhat.com/):
2025995 - Rebase tech preview on latest upstream v1.2.x branch 2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache 2036784 - osp controller (fencing enabled) in downed state after system manual crash test
- Bugs fixed (https://bugzilla.redhat.com/):
1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic
- Summary:
Red Hat OpenShift Virtualization release 4.8.3 is now available with updates to packages and images that fix several bugs and add enhancements. Description:
OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform. Bugs fixed (https://bugzilla.redhat.com/):
1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic 1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet 1997017 - unprivileged client fails to get guest agent data 1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed 2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount 2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import 2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed 2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion 2007336 - 4.8.3 containers 2007776 - Failed to Migrate Windows VM with CDROM (readonly) 2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13 2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted 2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues 2026881 - [4.8.3] vlan-filtering is getting applied on veth ports
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: Migration Toolkit for Containers (MTC) 1.5.2 security update and bugfix advisory Advisory ID: RHSA-2021:4848-01 Product: Red Hat Migration Toolkit Advisory URL: https://access.redhat.com/errata/RHSA-2021:4848 Issue date: 2021-11-29 CVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3445 CVE-2021-3580 CVE-2021-3620 CVE-2021-3733 CVE-2021-3757 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800 CVE-2021-3948 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-22946 CVE-2021-22947 CVE-2021-23840 CVE-2021-23841 CVE-2021-27218 CVE-2021-27645 CVE-2021-28153 CVE-2021-33560 CVE-2021-33574 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-36222 CVE-2021-37750 ==================================================================== 1. Summary:
The Migration Toolkit for Containers (MTC) 1.5.2 is now available.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.
Security Fix(es):
- nodejs-immer: prototype pollution may lead to DoS or remote code execution (CVE-2021-3757)

- mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html
- Bugs fixed (https://bugzilla.redhat.com/):
2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution 2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport) 2006842 - MigCluster CR remains in "unready" state and source registry is inaccessible after temporary shutdown of source cluster 2007429 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration 2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)
- References:
https://access.redhat.com/security/cve/CVE-2018-20673 https://access.redhat.com/security/cve/CVE-2019-5827 https://access.redhat.com/security/cve/CVE-2019-13750 https://access.redhat.com/security/cve/CVE-2019-13751 https://access.redhat.com/security/cve/CVE-2019-17594 https://access.redhat.com/security/cve/CVE-2019-17595 https://access.redhat.com/security/cve/CVE-2019-18218 https://access.redhat.com/security/cve/CVE-2019-19603 https://access.redhat.com/security/cve/CVE-2019-20838 https://access.redhat.com/security/cve/CVE-2020-12762 https://access.redhat.com/security/cve/CVE-2020-13435 https://access.redhat.com/security/cve/CVE-2020-14145 https://access.redhat.com/security/cve/CVE-2020-14155 https://access.redhat.com/security/cve/CVE-2020-16135 https://access.redhat.com/security/cve/CVE-2020-24370 https://access.redhat.com/security/cve/CVE-2021-3200 https://access.redhat.com/security/cve/CVE-2021-3445 https://access.redhat.com/security/cve/CVE-2021-3580 https://access.redhat.com/security/cve/CVE-2021-3620 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3757 https://access.redhat.com/security/cve/CVE-2021-3778 https://access.redhat.com/security/cve/CVE-2021-3796 https://access.redhat.com/security/cve/CVE-2021-3800 https://access.redhat.com/security/cve/CVE-2021-3948 https://access.redhat.com/security/cve/CVE-2021-20231 https://access.redhat.com/security/cve/CVE-2021-20232 https://access.redhat.com/security/cve/CVE-2021-20266 https://access.redhat.com/security/cve/CVE-2021-22876 https://access.redhat.com/security/cve/CVE-2021-22898 https://access.redhat.com/security/cve/CVE-2021-22925 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-27645 
https://access.redhat.com/security/cve/CVE-2021-28153 https://access.redhat.com/security/cve/CVE-2021-33560 https://access.redhat.com/security/cve/CVE-2021-33574 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-35942 https://access.redhat.com/security/cve/CVE-2021-36084 https://access.redhat.com/security/cve/CVE-2021-36085 https://access.redhat.com/security/cve/CVE-2021-36086 https://access.redhat.com/security/cve/CVE-2021-36087 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce .
The following data is constructed from data provided by Red Hat's json file at:
https://access.redhat.com/security/data/csaf/v2/advisories/2023/rhsa-2023_6431.json
Red Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat's archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment.
Description:
The libfastjson library provides essential JavaScript Object Notation (JSON) handling functions. The library enables users to construct JSON objects in C, output them as JSON-formatted strings, and convert JSON-formatted strings back to the C representation of JSON objects.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.3 Release Notes linked from the References section
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202005-0397", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "18.04" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "31" }, { "model": "json-c", "scope": "lt", "trust": 1.0, "vendor": "json c", 
"version": "0.15-20200726" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "32" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "19.10" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "8.0" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "14.04" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "16.04" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "20.04" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "30" }, { "model": "ubuntu linux", "scope": "eq", "trust": 1.0, "vendor": "canonical", "version": "12.04" } ], "sources": [ { "db": "NVD", "id": "CVE-2020-12762" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:json-c:json-c:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "0.15-20200726", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:30:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:31:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:o:fedoraproject:fedora:32:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:8.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:18.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:14.04:*:*:*:esm:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:19.10:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:20.04:*:*:*:lts:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:16.04:*:*:*:esm:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:canonical:ubuntu_linux:12.04:*:*:*:-:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-12762" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "166308" }, { "db": "PACKETSTORM", "id": "166789" }, { "db": 
"PACKETSTORM", "id": "165135" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" } ], "trust": 1.0 }, "cve": "CVE-2020-12762", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 6.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "PARTIAL", "baseScore": 6.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "CVE-2020-12762", "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:P", 
"version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-12762", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2020-12762", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-12762" }, { "db": "NVD", "id": "CVE-2020-12762" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "json-c through 0.14 has an integer overflow and out-of-bounds write via a large JSON file, as demonstrated by printbuf_memappend. Summary:\n\nAn update is now available for OpenShift Logging 5.1. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nFor Red Hat OpenShift Logging 5.1, see the following instructions to apply\nthis update:\n\nhttps://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. Solution:\n\nOSP 16.2.z Release - OSP Director Operator Containers\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2025995 - Rebase tech preview on latest upstream v1.2.x branch\n2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache\n2036784 - osp controller (fencing enabled) in downed state after system manual crash test\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1995656 - CVE-2021-36221 golang: net/http/httputil: panic due to racy read of persistConn after handler panic\n\n5. Summary:\n\nRed Hat OpenShift Virtualization release 4.8.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. Description:\n\nOpenShift Virtualization is Red Hat\u0027s virtualization solution designed for\nRed Hat OpenShift Container Platform. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1983596 - CVE-2021-34558 golang: crypto/tls: certificate of wrong type is causing TLS client to panic\n1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet\n1997017 - unprivileged client fails to get guest agent data\n1998855 - Node drain: Sometimes source virt-launcher pod status is Failed and not Completed\n2000251 - RoleBinding and ClusterRoleBinding brought in by kubevirt does not get reconciled when kind is ServiceAccount\n2001270 - [VMIO] [Warm from Vmware] Snapshot files are not deleted after Successful Import\n2001281 - [VMIO] [Warm from VMware] Source VM should not be turned ON if vmio import is removed\n2001901 - [4.8.3] NNCP creation failures after nmstate-handler pod deletion\n2007336 - 4.8.3 containers\n2007776 - Failed to Migrate Windows VM with CDROM (readonly)\n2008511 - [CNV-4.8.3] VMI is in LiveMigrate loop when Upgrading Cluster from 2.6.7/4.7.32 to OCP 4.8.13\n2012890 - With descheduler during multiple VMIs migrations, some VMs are restarted\n2025475 - [4.8.3] Upgrade from 2.6 to 4.x versions failed due to vlan-filtering issues\n2026881 - [4.8.3] vlan-filtering is getting applied on veth ports\n\n5. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: Migration Toolkit for Containers (MTC) 1.5.2 security update and bugfix advisory\nAdvisory ID: RHSA-2021:4848-01\nProduct: Red Hat Migration Toolkit\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:4848\nIssue date: 2021-11-29\nCVE Names: CVE-2018-20673 CVE-2019-5827 CVE-2019-13750\n CVE-2019-13751 CVE-2019-17594 CVE-2019-17595\n CVE-2019-18218 CVE-2019-19603 CVE-2019-20838\n CVE-2020-12762 CVE-2020-13435 CVE-2020-14145\n CVE-2020-14155 CVE-2020-16135 CVE-2020-24370\n CVE-2021-3200 CVE-2021-3445 CVE-2021-3580\n CVE-2021-3620 CVE-2021-3733 CVE-2021-3757\n CVE-2021-3778 CVE-2021-3796 CVE-2021-3800\n CVE-2021-3948 CVE-2021-20231 CVE-2021-20232\n CVE-2021-20266 CVE-2021-22876 CVE-2021-22898\n CVE-2021-22925 CVE-2021-22946 CVE-2021-22947\n CVE-2021-23840 CVE-2021-23841 CVE-2021-27218\n CVE-2021-27645 CVE-2021-28153 CVE-2021-33560\n CVE-2021-33574 CVE-2021-33928 CVE-2021-33929\n CVE-2021-33930 CVE-2021-33938 CVE-2021-35942\n CVE-2021-36084 CVE-2021-36085 CVE-2021-36086\n CVE-2021-36087 CVE-2021-36222 CVE-2021-37750\n====================================================================\n1. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.5.2 is now available. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. 
\n\nSecurity Fix(es):\n\n* nodejs-immer: prototype pollution may lead to DoS or remote code\nexecution (CVE-2021-3757)\n\n* mig-controller: incorrect namespaces handling may lead to not authorized\nusage of Migration Toolkit for Containers (MTC) (CVE-2021-3948)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n3. Solution:\n\nFor details on how to install and use MTC, refer to:\n\nhttps://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2000734 - CVE-2021-3757 nodejs-immer: prototype pollution may lead to DoS or remote code execution\n2005438 - Combining Rsync and Stunnel in a single pod can degrade performance (1.5 backport)\n2006842 - MigCluster CR remains in \"unready\" state and source registry is inaccessible after temporary shutdown of source cluster\n2007429 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n2022017 - CVE-2021-3948 mig-controller: incorrect namespaces handling may lead to not authorized usage of Migration Toolkit for Containers (MTC)\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2018-20673\nhttps://access.redhat.com/security/cve/CVE-2019-5827\nhttps://access.redhat.com/security/cve/CVE-2019-13750\nhttps://access.redhat.com/security/cve/CVE-2019-13751\nhttps://access.redhat.com/security/cve/CVE-2019-17594\nhttps://access.redhat.com/security/cve/CVE-2019-17595\nhttps://access.redhat.com/security/cve/CVE-2019-18218\nhttps://access.redhat.com/security/cve/CVE-2019-19603\nhttps://access.redhat.com/security/cve/CVE-2019-20838\nhttps://access.redhat.com/security/cve/CVE-2020-12762\nhttps://access.redhat.com/security/cve/CVE-2020-13435\nhttps://access.redhat.com/security/cve/CVE-2020-14145\nhttps://access.redhat.com/security/cve/CVE-2020-14155\nhttps://access.redhat.com/security/cve/CVE-2020-16135\nhttps://access.redhat.com/security/cve/CVE-2020-24370\nhttps://access.redhat.com/security/cve/CVE-2021-3200\nhttps://access.redhat.com/security/cve/CVE-2021-3445\nhttps://access.redhat.com/security/cve/CVE-2021-3580\nhttps://access.redhat.com/security/cve/CVE-2021-3620\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3757\nhttps://access.redhat.com/security/cve/CVE-2021-3778\nhttps://access.redhat.com/security/cve/CVE-2021-3796\nhttps://access.redhat.com/security/cve/CVE-2021-3800\nhttps://access.redhat.com/security/cve/CVE-2021-3948\nhttps://access.redhat.com/security/cve/CVE-2021-20231\nhttps://access.redhat.com/security/cve/CVE-2021-20232\nhttps://access.redhat.com/security/cve/CVE-2021-20266\nhttps://access.redhat.com/security/cve/CVE-2021-22876\nhttps://access.redhat.com/security/cve/CVE-2021-22898\nhttps://access.redhat.com/security/cve/CVE-2021-22925\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.r
edhat.com/security/cve/CVE-2021-27645\nhttps://access.redhat.com/security/cve/CVE-2021-28153\nhttps://access.redhat.com/security/cve/CVE-2021-33560\nhttps://access.redhat.com/security/cve/CVE-2021-33574\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-35942\nhttps://access.redhat.com/security/cve/CVE-2021-36084\nhttps://access.redhat.com/security/cve/CVE-2021-36085\nhttps://access.redhat.com/security/cve/CVE-2021-36086\nhttps://access.redhat.com/security/cve/CVE-2021-36087\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. 
\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYaU7CNzjgjWX9erEAQi5qxAAi3QQYaLAZJEIsb+WHT9YHKG3jMM6xLtl\n2TpkFYEcW+sCSi7MfKUnjpuXgZtu23cM5QHPWcEd0WhvPNtPX9Z7PqKlixPoPRvv\n36cH/uIGluB8h9U4NUdvYuuv/z748mpLKRI/8tRyEe4bFiL/lwn9T9OqvK296KEU\nUa6SwzLUyDgwqpICh2fopfebTF80BnjhAs1t1R9eWEQrq28FxZUEBsAtla4hAR3p\n89hXXYgg/b7HjBz5XJBVKgUIhs2zYTy/9R1D/FClLyu+ZkZfNspZGMQ1PpdV8nUs\n/g/u4KrR0tgYs9M5UQVHPHl5FiCf+9yeNt91biCxoidWp5qH6M+fGQ2EPOoATCBv\nyTau8U82gjxnRrSEn5Tp+r8i7Ra7k6GJ0n3Vt3x5LFIVTzZgsHAQKzhYnMiQmrgQ\nqZLLIvJ3BY9jTtMp2MXh4tNiuMk8kWOHzKvw10M93HaUSaSBEW19mljxnpvtbn8F\n6XFCGHp7VtVwd8SYW1Epqiyex1NkXE/D/7G5L1rfi+x1LAAHEGrGMFMKijnuaWCA\nTfHt/Jklzuy7S23cnBQsxOVCtmav54nOWikhNyEMJ3q8Nd5ddXzhoZqWiVZJ2Vyu\nR4MUojb+mhzX3nG1+9Qd5AzTLs/SctmVd1gMtiv8LE0fsKisNT/LYreZ/7FFypNW\nvLrDSONMTWU\\xe7ia\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. \n\nThe following data is constructed from data provided by Red Hat\u0027s json file at:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2023/rhsa-2023_6431.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment. \n\n\n\n\nDescription:\n\nThe libfastjson library provides essential JavaScript Object Notation (JSON) handling functions. The library enables users to construct JSON objects in C, output them as JSON-formatted strings, and convert JSON-formatted strings back to the C representation of JSON objects. 
\n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat Enterprise Linux 9.3 Release Notes linked from the References section", "sources": [ { "db": "NVD", "id": "CVE-2020-12762" }, { "db": "VULMON", "id": "CVE-2020-12762" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "166308" }, { "db": "PACKETSTORM", "id": "166789" }, { "db": "PACKETSTORM", "id": "165135" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" } ], "trust": 1.89 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-12762", "trust": 2.1 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-12762", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165288", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166308", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166789", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165135", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165099", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "176732", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "175527", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "177428", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "177472", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-12762" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "166308" }, { "db": "PACKETSTORM", "id": "166789" }, { "db": "PACKETSTORM", "id": "165135" }, { "db": "PACKETSTORM", 
"id": "165099" }, { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" }, { "db": "NVD", "id": "CVE-2020-12762" } ] }, "id": "VAR-202005-0397", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:34:58.765000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Ubuntu Security Notice: json-c vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-4360-1" }, { "title": "Ubuntu Security Notice: json-c vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-4360-4" }, { "title": "Debian CVElist Bug Report Logs: json-c: CVE-2020-12762", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=136719ded61e273212f821541d12e175" }, { "title": "Debian Security Advisories: DSA-4741-1 json-c -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=20b6b384fb69b76b5f17fc7ea1278139" }, { "title": "Red Hat: Moderate: libfastjson security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20236431 - security advisory" }, { "title": "Amazon Linux AMI: ALAS-2020-1381", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2020-1381" }, { "title": "Amazon Linux 2: ALAS2-2020-1442", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2020-1442" }, { "title": "Amazon Linux 2: ALAS2-2023-2079", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-2079" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2020-12762 log" }, { "title": "Red Hat: Moderate: Release of OpenShift Serverless 1.20.0", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220434 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift distributed tracing 2.1.0 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220318 - security advisory" }, { "title": "Red Hat: Important: Release of containers for OSP 16.2 director operator tech preview", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220842 - security advisory" }, { "title": "Red Hat: Moderate: Gatekeeper Operator v0.2 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221081 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift GitOps security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220580 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.2.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220856 - security advisory" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.5.4 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221396 - security advisory" }, { "title": "Siemens Security 
Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d" }, { "title": "clamav-win32", "trust": 0.1, "url": "https://github.com/clamwin/clamav-win32 " }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-12762" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-190", "trust": 1.0 }, { "problemtype": "CWE-787", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2020-12762" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://usn.ubuntu.com/4360-1/" }, { "trust": 1.1, "url": "https://github.com/json-c/json-c/pull/592" }, { "trust": 1.1, "url": "https://github.com/rsyslog/libfastjson/issues/161" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00032.html" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2020/05/msg00034.html" }, { "trust": 1.1, "url": "https://usn.ubuntu.com/4360-4/" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202006-13" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2020/07/msg00031.html" }, { "trust": 1.1, "url": "https://www.debian.org/security/2020/dsa-4741" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210521-0001/" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": 
"https://lists.debian.org/debian-lts-announce/2023/06/msg00023.html" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cqqrrgbqcawnccj2hn3w5sscz4qgmxqi/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/cbr36ixybhitazfb5pfbjted22wo5onb/" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/w226tscjbeoxdufvknwnh7etg7ar6mcs/" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.9, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, 
{ "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.6, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.6, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { 
"trust": 0.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.6, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.3, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=1835253" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.2, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20317" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43267" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-4122" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/787.html" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/190.html" }, { "trust": 0.1, "url": "https://github.com/clamwin/clamav-win32" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5128" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5129" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36331" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3984" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3521" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4193" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3572" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3872" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0842" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-3426" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3200" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3445" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33574" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4019" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4192" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25315" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25236" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21684" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25235" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23308" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22822" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0392" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0261" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22826" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3999" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22817" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0413" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0847" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1396" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-45960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22825" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-46143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22816" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0359" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0318" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25709" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36385" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-34558" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20317" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4914" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3757" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4848" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3948" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3620" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2024:0411" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_0411.json" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:6431" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2023/rhsa-2023_6431.json" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.3_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_1086.json" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2024:1086" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_1154.json" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2024:1154" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-12762" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "166308" }, { "db": "PACKETSTORM", "id": "166789" }, { "db": "PACKETSTORM", "id": "165135" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" }, { "db": "NVD", "id": "CVE-2020-12762" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2020-12762" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "165288" }, { "db": "PACKETSTORM", "id": "166308" }, { "db": 
"PACKETSTORM", "id": "166789" }, { "db": "PACKETSTORM", "id": "165135" }, { "db": "PACKETSTORM", "id": "165099" }, { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" }, { "db": "NVD", "id": "CVE-2020-12762" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-05-09T00:00:00", "db": "VULMON", "id": "CVE-2020-12762" }, { "date": "2021-12-15T15:20:33", "db": "PACKETSTORM", "id": "165286" }, { "date": "2021-12-15T15:22:36", "db": "PACKETSTORM", "id": "165288" }, { "date": "2022-03-15T15:41:45", "db": "PACKETSTORM", "id": "166308" }, { "date": "2022-04-20T15:12:33", "db": "PACKETSTORM", "id": "166789" }, { "date": "2021-12-03T16:41:45", "db": "PACKETSTORM", "id": "165135" }, { "date": "2021-11-30T14:44:48", "db": "PACKETSTORM", "id": "165099" }, { "date": "2024-01-26T15:22:22", "db": "PACKETSTORM", "id": "176732" }, { "date": "2023-11-13T20:56:29", "db": "PACKETSTORM", "id": "175527" }, { "date": "2024-03-05T14:30:35", "db": "PACKETSTORM", "id": "177428" }, { "date": "2024-03-06T17:07:07", "db": "PACKETSTORM", "id": "177472" }, { "date": "2020-05-09T18:15:11.283000", "db": "NVD", "id": "CVE-2020-12762" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2020-12762" }, { "date": "2023-11-07T03:15:44.277000", "db": "NVD", "id": "CVE-2020-12762" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-5128-06", "sources": [ { "db": "PACKETSTORM", "id": "165286" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow", "sources": [ { "db": "PACKETSTORM", "id": "176732" }, { "db": "PACKETSTORM", "id": "175527" }, { "db": "PACKETSTORM", "id": "177428" }, { "db": "PACKETSTORM", "id": "177472" } ], "trust": 0.4 } }
var-202012-1420
Vulnerability from variot
The package ua-parser-js before 0.7.23 is vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see the linked commit for more information). ua-parser-js contains a resource exhaustion vulnerability, which may result in a denial-of-service (DoS) condition.
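A ReDoS bug of this kind lets a crafted user-agent string drive a backtracking regex engine into exponential work. The sketch below uses a deliberately toy pattern (`^(a+)+$`, not one of ua-parser-js's actual regexes) to show how an almost-matching input blows up matching time:

```python
import re
import time

# Toy pattern with catastrophic backtracking -- illustrative only, NOT the
# actual regex fixed in ua-parser-js 0.7.23. The nested quantifier (a+)+
# gives the engine exponentially many ways to partition a run of 'a's.
vulnerable = re.compile(r'^(a+)+$')

def match_time(n):
    payload = 'a' * n + '!'  # the trailing '!' forces full backtracking
    start = time.perf_counter()
    assert vulnerable.match(payload) is None
    return time.perf_counter() - start

for n in (10, 16, 22):
    print(f"n={n}: {match_time(n):.4f}s")  # time roughly doubles per extra 'a'
```

The 0.7.23 fix rewrote the affected regexes to avoid ambiguous nested quantifiers, removing the exponential backtracking paths.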
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202012-1420", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ua-parser-js", "scope": "lt", "trust": 1.0, "vendor": "ua parser js", "version": "0.7.23" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": 
"siemens", "version": "1.0" }, { "model": "ua-parser-js", "scope": "eq", "trust": 0.8, "vendor": "faisalman", "version": "0.7.23" }, { "model": "ua-parser-js", "scope": "eq", "trust": 0.8, "vendor": "faisalman", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:ua-parser-js_project:ua-parser-js:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "0.7.23", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-7793" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202012-978" } ], "trust": 0.6 }, "cve": "CVE-2020-7793", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { 
"@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 5.0, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-7793", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "OTHER", "availabilityImpact": "High", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "None", 
"exploitabilityScore": null, "id": "JVNDB-2020-014179", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-7793", "trust": 1.8, "value": "HIGH" }, { "author": "report@snyk.io", "id": "CVE-2020-7793", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202012-978", "trust": 0.6, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2020-7793", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The package ua-parser-js before 0.7.23 are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple regexes (see linked commit for more info). 
ua-parser-js Exists in a resource exhaustion vulnerability.Service operation interruption (DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "VULMON", "id": "CVE-2020-7793" } ], "trust": 1.71 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-7793", "trust": 3.3 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-014179", "trust": 0.8 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2555", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022052615", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202012-978", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2020-7793", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "id": "VAR-202012-1420", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:43:56.929000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Fix\u00a0ReDoS\u00a0vulnerabilities\u00a0reported\u00a0by\u00a0Snyk", "trust": 0.8, "url": 
"https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18" }, { "title": "ua-parser-js Remediation of resource management error vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=137311" }, { "title": "awesome-redos-security", "trust": 0.1, "url": "https://github.com/engn33r/awesome-redos-security " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-Other", "trust": 1.0 }, { "problemtype": "Resource exhaustion (CWE-400) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.7, "url": "https://github.com/faisalman/ua-parser-js/commit/6d1f26df051ba681463ef109d36c9cf0f7e32b18" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithubfaisalman-1050388" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-js-uaparserjs-1023599" }, { "trust": 1.7, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1050387" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7793" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu99475301/index.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615" }, { "trust": 
0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2555" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2020-7793" }, { "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "db": "NVD", "id": "CVE-2020-7793" }, { "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-12-11T00:00:00", "db": "VULMON", "id": "CVE-2020-7793" }, { "date": "2021-08-04T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "date": "2020-12-11T14:15:11.283000", "db": "NVD", "id": "CVE-2020-7793" }, { "date": "2020-12-11T00:00:00", "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-13T00:00:00", "db": "VULMON", "id": "CVE-2020-7793" }, { "date": "2022-09-20T05:31:00", "db": "JVNDB", "id": "JVNDB-2020-014179" }, { "date": "2022-09-13T21:23:36.800000", "db": "NVD", "id": "CVE-2020-7793" }, { "date": "2022-09-19T00:00:00", "db": "CNNVD", "id": "CNNVD-202012-978" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": 
"CNNVD", "id": "CNNVD-202012-978" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "ua-parser-js\u00a0 Resource exhaustion vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-014179" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202012-978" } ], "trust": 0.6 } }
var-202102-1492
Vulnerability from variot
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Lodash contains a resource exhaustion vulnerability, which may result in a denial-of-service (DoS) condition. lodash is an open-source JavaScript utility library. Follow CNNVD or vendor announcements for updates. Description:
The ovirt-engine package provides the manager for virtualization environments. This manager enables admins to define hosts and networks, as well as to add storage, create VMs and manage user permissions.
Bug Fix(es):
-
This release adds the queue attribute to the virtio-scsi driver in the virtual machine configuration. This improvement enables multi-queue performance with the virtio-scsi driver. (BZ#911394)
-
With this release, source-load-balancing has been added as a new sub-option for xmit_hash_policy. It can be configured for bond modes balance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying xmit_hash_policy=vlan+srcmac. (BZ#1683987)
-
The default DataCenter/Cluster will be set to compatibility level 4.6 on new installations of Red Hat Virtualization 4.4.6. (BZ#1950348)
-
With this release, support has been added for copying disks between regular Storage Domains and Managed Block Storage Domains. It is now possible to migrate disks between Managed Block Storage Domains and regular Storage Domains. (BZ#1906074)
-
Previously, the engine-config value LiveSnapshotPerformFreezeInEngine defaulted to false and was supposed to be used only in cluster compatibility levels below 4.4, but the value was set at the general version level. With this release, each cluster level has its own value, defaulting to false for 4.4 and above. This reduces unnecessary overhead from timeouts of the file system freeze command. (BZ#1932284)
-
With this release, running virtual machines is supported for up to 16TB of RAM on x86_64 architectures. (BZ#1944723)
-
This release adds the gathering of oVirt/RHV related certificates to allow easier debugging of issues for faster customer help and issue resolution. Information from certificates is now included as part of the sosreport. Note that no corresponding private key information is gathered, due to security considerations. (BZ#1845877)
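As a concrete illustration of the new bonding hash policy from BZ#1683987, assuming the standard Linux kernel bonding driver's sysfs interface (the bond name and commands below are a sketch, not taken from the advisory), the policy could be applied to an existing balance-xor bond like this:

```shell
# Sketch only: requires root and a kernel/driver that accepts the
# vlan+srcmac transmit hash policy (as shipped with this release).
echo vlan+srcmac > /sys/class/net/bond0/bonding/xmit_hash_policy
cat /sys/class/net/bond0/bonding/xmit_hash_policy   # verify the setting
```

The same string can be passed as the module option xmit_hash_policy=vlan+srcmac mentioned in the advisory.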
-
Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/2974891
- Bugs fixed (https://bugzilla.redhat.com/):
1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine
1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain
1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine
1717411 - improve engine logging when migration fail
1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs
1775145 - Incorrect message from hot-plugging memory
1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC.
1845877 - [RFE] Collect information about RHV PKI
1875363 - engine-setup failing on FIPS enabled rhel8 machine
1906074 - [RFE] Support disks copy between regular and managed block storage domains
1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration
1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning
1919195 - Unable to create snapshot without saving memory of running VM from VM Portal.
1919984 - engine-setup fails to deploy the grafana service in an external DWH server
1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal
1926018 - Failed to run VM after FIPS mode is enabled
1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package.
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate'
1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4"
1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful
1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured
1932284 - Engine handled FS freeze is not fast enough for Windows systems
1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed
1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2
1943267 - Snapshot creation is failing for VM having vGPU.
1944723 - [RFE] Support virtual machines with 16TB memory
1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted)
1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule
1949547 - rhv-log-collector-analyzer report contains 'b characters
1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6
1950466 - Host installation failed
1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs.

Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
=====================================================================
                   Red Hat Security Advisory

Synopsis:          Moderate: OpenShift Container Platform 4.8.2 bug fix and security update
Advisory ID:       RHSA-2021:2438-01
Product:           Red Hat OpenShift Enterprise
Advisory URL:      https://access.redhat.com/errata/RHSA-2021:2438
Issue date:        2021-07-27
CVE Names:         CVE-2016-2183 CVE-2020-7774 CVE-2020-15106
                   CVE-2020-15112 CVE-2020-15113 CVE-2020-15114
                   CVE-2020-15136 CVE-2020-26160 CVE-2020-26541
                   CVE-2020-28469 CVE-2020-28500 CVE-2020-28852
                   CVE-2021-3114 CVE-2021-3121 CVE-2021-3516
                   CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
                   CVE-2021-3537 CVE-2021-3541 CVE-2021-3636
                   CVE-2021-20206 CVE-2021-20271 CVE-2021-20291
                   CVE-2021-21419 CVE-2021-21623 CVE-2021-21639
                   CVE-2021-21640 CVE-2021-21648 CVE-2021-22133
                   CVE-2021-23337 CVE-2021-23362 CVE-2021-23368
                   CVE-2021-23382 CVE-2021-25735 CVE-2021-25737
                   CVE-2021-26539 CVE-2021-26540 CVE-2021-27292
                   CVE-2021-28092 CVE-2021-29059 CVE-2021-29622
                   CVE-2021-32399 CVE-2021-33034 CVE-2021-33194
                   CVE-2021-33909
=====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.8.2 is now available with updates to packages and images that fix several bugs and add enhancements.
This release includes a security update for Red Hat OpenShift Container Platform 4.8.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.2. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2021:2437
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
Security Fix(es):
- SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32) (CVE-2016-2183)
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)
- etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)
- etcd: DoS in wal/wal.go (CVE-2020-15112)
- etcd: directories created via os.MkdirAll are not checked for permissions (CVE-2020-15113)
- etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS (CVE-2020-15114)
- etcd: no authentication is performed against endpoints provided in the --endpoints flag (CVE-2020-15136)
- jwt-go: access restriction bypass vulnerability (CVE-2020-26160)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- golang: crypto/elliptic: incorrect operations on the P-224 curve (CVE-2021-3114)
- containernetworking-cni: Arbitrary path injection via type field in CNI configuration (CVE-2021-20206)
- containers/storage: DoS via malicious image (CVE-2021-20291)
- prometheus: open redirect under the /new endpoint (CVE-2021-29622)
- golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)
- go.elastic.co/apm: leaks sensitive HTTP headers during panic (CVE-2021-22133)
Space precludes listing in detail the following additional CVEs fixes: (CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382), (CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and (CVE-2021-23368)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64
The image digest is sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x
The image digest is sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le
The image digest is sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f
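A valid image digest is the literal prefix `sha256:` followed by 64 lowercase hexadecimal characters; anything else will be rejected when pulling by digest. A minimal shell sketch (not part of the advisory) that sanity-checks the x86_64 digest before pinning an image reference to it:

```shell
# Sanity-check that a release image digest is well-formed
# (literal "sha256:" prefix followed by 64 lowercase hex characters)
# before using it as a pinned image reference.
digest="sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc"

if printf '%s' "$digest" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  echo "digest format OK"
  # A pinned reference then takes the form:
  #   quay.io/openshift-release-dev/ocp-release@${digest}
else
  echo "malformed digest: $digest" >&2
  exit 1
fi
```

Pinning by digest rather than by tag guarantees that the exact release content inspected above is what gets pulled.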
All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
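The update check mentioned above can also be run from the CLI; this is a sketch, assuming an authenticated `oc` session against the cluster (the guard lets the script run harmlessly on machines where the CLI is absent):

```shell
# List the cluster's current version and any updates available in its
# channel. Requires a logged-in `oc` session; skip gracefully otherwise.
if command -v oc >/dev/null 2>&1; then
  oc adm upgrade
else
  echo "oc CLI not found; download it from the OpenShift clients page"
fi
```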
- Solution:
For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)
1725981 - oc explain does not work well with full resource.group names
1747270 - [osp] Machine with name "operator-sdk init --help
1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard
1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert
1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions
1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host
1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions
1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go
1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS
1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag
1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method
1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics
1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly
1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore
1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack
1873649 - proxy.config.openshift.io should validate user inputs
1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials
1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace "openshift-user-workload-monitoring" does not exist
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/oc image extract
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the skips field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - oc adm catalog mirror doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use oc new-app --name=testapp2 -i with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in “Insights” popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the resources section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - NetworkManager overlay FS not being created on None platform
1928512 - sap license management logs gatherer
1928537 - Cannot IPI with tang/tpm disk encryption
1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS
1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release
1928850 - Unable to pull images due to limited quota on Docker Hub
1928851 - manually creating NetNamespaces will break things and this is not obvious
1928867 - golden images - DV should not be created with WaitForFirstConsumer
1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1
1928875 - Update translations
1928893 - Memory Pressure Drop Down Info is stating "Disk" capacity is low instead of memory
1928931 - DNSRecord CRD is using deprecated v1beta1 API
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929052 - Add new Jenkins agent maven dir for 3.6
1929056 - kube-apiserver-availability.rules are failing evaluation
1929110 - LoadBalancer service check test fails during vsphere upgrade
1929136 - openshift isn't able to mount nfs manila shares to pods
1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner
1929243 - Namespace column missing in Nodes Node Details / pods tab
1929277 - Monitoring workloads using too high a priorityclass
1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1
1929314 - ovn-kubernetes endpoint slice controller doesn't run on CI jobs
1929359 - etcd-quorum-guard uses origin-cli [4.8]
1929577 - Edit Application action overwrites Deployment envFrom values on save
1929654 - Registry for Azure uses legacy V1 StorageAccount
1929693 - Pod stuck at "ContainerCreating" status
1929733 - oVirt CSI driver operator is constantly restarting
1929769 - Getting 404 after switching user perspective in another tab and reload Project details
1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow
1929824 - fix alerting on volume name check for vsphere
1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade
1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost
1930007 - filter dropdown item filter and resource list dropdown item filter doesn't support multi selection
1930015 - OS list is overlapped by buttons in template wizard
1930064 - Web console crashes during VM creation from template when no storage classes are defined
1930220 - Cinder CSI driver is not able to mount volumes under heavier load
1930240 - Generated clouds.yaml incomplete when provisioning network is disabled
1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console
1930268 - intel vfio devices are not expose as resources
1930356 - Darwin binary missing from mirror.openshift.com
1930393 - Gather info about unhealthy SAP pods
1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console
1930570 - Jenkins templates are displayed in Developer Catalog twice
1930620 - the logLevel field in containerruntimeconfig can't be set to "trace"
1930631 - Image local-storage-mustgather in the doc does not come from product registry
1930893 - Backport upstream patch 98956 for pod terminations
1931005 - Related objects page doesn't show the object when its name is empty
1931103 - remove periodic log within kubelet
1931115 - Azure cluster install fails with worker type workers Standard_D4_v2
1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups
1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS
1931467 - Kubelet consuming a large amount of CPU and memory and node becoming unhealthy
1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container
1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails
1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)
1931629 - Conversational Hub Fails due to ImagePullBackOff
1931637 - Kubeturbo Operator fails due to ImagePullBackOff
1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race.
1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint
1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods
1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently
1931883 - Fail to install Volume Expander Operator due to CrashLoopBackOff
1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state
1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6
1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7
1932001 - Only one of multiple subscriptions to the same package is honored
1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown
1932105 - machine-config ClusterOperator claims level while control-plane still updating
1932133 - AWS EBS CSI Driver doesn’t support “csi.storage.k8s.io/fsType” parameter
1932135 - When “iopsPerGB” parameter is not set, event for AWS EBS CSI Driver provisioning is not clear
1932152 - When “iopsPerGB” parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear
1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors
1932182 - catalog operator causing CPU spikes and bad etcd performance
1932229 - Can’t find kubelet metrics for aws ebs csi volumes
1932281 - [Assisted-4.7][UI] Unable to change upgrade channel once upgrades were discovered
1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the "allowedIframeHostnames" option can lead to bypass hostname whitelist for iframe element
1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \"\n"
1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation
1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new "canary" route
1932453 - Update Japanese timestamp format
1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue
1932487 - [OKD] origin-branding manifest is missing cluster profile annotations
1932502 - Setting MTU for a bond interface using Kernel arguments is not working
1932618 - Alerts during a test run should fail the test job, but were not
1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be
1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy
1932673 - Virtual machine template provided by red hat should not be editable. The UI allows editing and then reversing the change after it was made
1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network
1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM
1932805 - e2e: test OAuth API connections in the tests by that name
1932816 - No new local storage operator bundle image is built
1932834 - enforce the use of hashed access/authorize tokens
1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console
1933102 - Canary daemonset uses default node selector
1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]
1933159 - multus DaemonSets should use maxUnavailable: 33%
1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%
1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%
1933179 - network-check-target DaemonSet should use maxUnavailable: 10%
1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%
1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%
1933263 - user manifest with nodeport services causes bootstrap to block
1933269 - Cluster unstable replacing an unhealthy etcd member
1933284 - Samples in CRD creation are ordered arbitrarily
1933414 - Machines are created with unexpected name for Ports
1933599 - bump k8s.io/apiserver to 1.20.3
1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like ":"
1933664 - Getting Forbidden for image in a container template when creating a sample app
1933708 - Grafana is not displaying deployment config resources in dashboard Default /Kubernetes / Compute Resources / Namespace (Workloads)
1933711 - EgressDNS: Keep short lived records at most 30s
1933730 - [AI-UI-Wizard] Toggling "Use extra disks for local storage" checkbox highlights the "Next" button to move forward but grays out once clicked
1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively
1933772 - MCD Crash Loop Backoff
1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior
1933857 - Details page can throw an uncaught exception if kindObj prop is undefined
1933880 - Kuryr-Controller crashes when it's missing the status object
1934021 - High RAM usage on machine api termination node system oom
1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17
1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade
1934085 - Scheduling conformance tests failing in a single node cluster
1934107 - cluster-authentication-operator builds URL incorrectly for IPv6
1934112 - Add memory and uptime metadata to IO archive
1934113 - mcd panic when there's not enough free disk space
1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP
1934163 - Thanos Querier restarting and getting alert ThanosQueryHttpRequestQueryRangeErrorRateHigh
1934174 - rootfs too small when enabling NBDE
1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3
1934177 - knative-camel-operator CreateContainerError "container_linux.go:366: starting container process caused: chdir to cwd (\"/home/nonroot\") set in config.json failed: permission denied"
1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0
1934229 - List page text filter has input lag
1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions
1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady
1934516 - Setup different priority classes for prometheus-k8s and prometheus-user-workload pods
1934556 - OCP-Metal images
1934557 - RHCOS boot image bump for LUKS fixes
1934643 - Need BFD failover capability on ECMP routes
1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%
1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)
1934905 - CoreDNS's "errors" plugin is not enabled for custom upstream resolvers
1935058 - Can’t finish installing sts clusters on aws government region
1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login
1935155 - IGMP/MLD packets being dropped
1935157 - [e2e][automation] environment tests broken
1935165 - OCP 4.6 Build fails when filename contains an umlaut
1935176 - Missing an indication whether the deployed setup is SNO.
1935269 - Topology operator group shows child Jobs. Not shown in details view's resources.
1935419 - Failed to scale worker using virtualmedia on Dell R640
1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting
1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7
1935541 - console operator panics in DefaultDeployment with nil cm
1935582 - prometheus liveness probes cause issues while replaying WAL
1935604 - high CPU usage fails ingress controller
1935667 - pipelinerun status icon rendering issue
1935706 - test: Detect when the master pool is still updating after upgrade
1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]
1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text
1935909 - New CSV using ServiceAccount named "default" stuck in Pending during upgrade
1936022 - DNS operator performs spurious updates in response to API's defaulting of daemonset's terminationGracePeriod and service's clusterIPs
1936030 - Ingress operator performs spurious updates in response to API's defaulting of NodePort service's clusterIPs field
1936223 - The IPI installer has a typo. It is missing the word "the" in "the Engine".
1936336 - Updating multus-cni builder & base images to be consistent with ART 4.8 (closed)
1936342 - kuryr-controller restarting after 3 days cluster running - pools without members
1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623
1936488 - [sig-instrumentation][Late] Alerts shouldn't report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error
1936515 - sdn-controller is missing some health checks
1936534 - When creating a worker with a used mac-address stuck on registering
1936585 - configure alerts if the catalogsources are missing
1936620 - OLM checkbox descriptor renders switch instead of checkbox
1936721 - network-metrics-deamon not associated with a priorityClassName
1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear
1936785 - Configmap gatherer doesn't include namespace name (in the archive path) in case of a configmap with binary data
1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection
1936798 - Authentication log gatherer shouldn't scan all the pod logs in the openshift-authentication namespace
1936801 - Support ServiceBinding 0.5.0+
1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow
1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies
1936859 - ovirt 4.4 -> 4.5 upgrade jobs are permafailing
1936867 - Periodic vsphere IPI install is broken - missing pip
1936871 - [Cinder CSI] Topology aware provisioning doesn't work when Nova and Cinder AZs are different
1936904 - Wrong output YAML when syncing groups without --confirm
1936983 - Topology view - vm details screen doesn't stop loading
1937005 - when kuryr quotas are unlimited, we should not send alerts
1937018 - FilterToolbar component does not handle 'null' value for 'rowFilters' prop
1937020 - Release new from image stream chooses incorrect ID based on status
1937077 - Blank White page on Topology
1937102 - Pod Containers Page Not Translated
1937122 - CAPBM changes to support flexible reboot modes
1937145 - [Local storage] PV provisioned by localvolumeset stays in "Released" status after the pod/pvc deleted
1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes
1937244 - [Local Storage] The model name of aws EBS isn't extracted correctly
1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes
1937452 - cluster-network-operator CI linting fails in master branch
1937459 - Wrong Subnet retrieved for Service without Selector
1937460 - [CI] Network quota pre-flight checks are failing the installation
1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster
1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation
1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint
1937535 - Not all image pulls within OpenShift builds retry
1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes
1937627 - Bump DEFAULT_DOC_URL for 4.8
1937628 - Bump upgrade channels for 4.8
1937658 - Description for storage class encryption during storagecluster creation needs to be updated
1937666 - Mouseover on headline
1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage
1937693 - ironic image "/" cluttered with files
1937694 - [oVirt] split ovirt providerIDReconciler logic into NodeController and ProviderIDController
1937717 - If browser default font size is 20, the layout of template screen breaks
1937722 - OCP 4.8 vuln due to BZ 1936445
1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator
1937941 - [RFE]fix wording for favorite templates
1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations
1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go
1938321 - Cannot view PackageManifest objects in YAML on 'Home > Search' page nor 'CatalogSource details > Operators tab'
1938465 - thanos-querier should set a CPU request on the thanos-query container
1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container
1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them
1938468 - kube-scheduler-operator has a container without a CPU request
1938492 - Marketplace extract container does not request CPU or memory
1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not
1938636 - Can't set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller
1938903 - Time range on dashboard page will be empty after drag and drop mouse in the graph
1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%
1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances
1938949 - [VPA] Updater failed to trigger evictions due to "vpa-admission-controller" not found
1939054 - machine healthcheck kills aws spot instance before generated
1939060 - CNO: nodes and masters are upgrading simultaneously
1939069 - Add source to vm template silently failed when no storage class is defined in the cluster
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1939168 - Builds failing for OCP 3.11 since PR#25 was merged
1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz
1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez
1939232 - CI tests using openshift/hello-world broken by Ruby Version Update
1939270 - fix co upgradeableFalse status and reason
1939294 - OLM may not delete pods with grace period zero (force delete)
1939412 - missed labels for thanos-ruler pods
1939485 - CVE-2021-20291 containers/storage: DoS via malicious image
1939547 - Include container="POD" in resource queries
1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0
1939573 - after entering valid git repo url on add flow page, throwing warning message instead of Validated
1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs
1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent
1939661 - support new AWS region ap-northeast-3
1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution
1939731 - Image registry operator reports unavailable during normal serial run
1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters
1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase
1939752 - ovnkube-master sbdb container does not set requests on cpu or memory
1939753 - Deleting HCO gets stuck if there is still a VM in the cluster
1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page
1939853 - [DOC] Creating manifests API should not allow folder in the "file_name"
1939865 - GCP PD CSI driver does not have CSIDriver instance
1939869 - [e2e][automation] Add annotations to datavolume for HPP
1939873 - Unlimited number of characters accepted for base domain name
1939943 - cluster-kube-apiserver-operator check-endpoints observed a panic: runtime error: invalid memory address or nil pointer dereference
1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration
1940057 - Openshift builds should use a watch instead of polling when checking for pod status
1940142 - 4.6->4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying
1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network
1940206 - Selector and VolumeTableRows not i18ned
1940207 - 4.7->4.6 rollbacks stuck on prometheusrules admission webhook "no route to host"
1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)
1940318 - No data under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod'
1940322 - Split of dashboard is wrong, many Network parts
1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn't have flavors needed for compute machines
1940361 - [e2e][automation] Fix vm action tests with storageclass HPP
1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters
1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys
1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages
1940499 - hybrid-overlay not logging properly before exiting due to an error
1940518 - Bare metal components lack resource requests
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned
1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info
1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list
1940876 - oVirt components lack resource requests
1940889 - Installation failures in OpenStack release jobs
1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io
1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP
1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster
1940950 - vsphere: client/bootstrap CSR double create
1940972 - vsphere: [4.6] CSR approval delayed for unknown reason
1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16.
1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy
1941342 - Add kata-osbuilder-generate.service as part of the default presets
1941456 - Multiple pods stuck in ContainerCreating status with the message "failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user" being seen in the journal log
1941526 - controller-manager-operator: Observed a panic: nil pointer dereference
1941592 - HAProxyDown not Firing
1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp
1941625 - Developer -> Topology - i18n misses
1941635 - Developer -> Monitoring - i18n misses
1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid
1941645 - Developer -> Builds - i18n misses
1941655 - Developer -> Pipelines - i18n misses
1941667 - Developer -> Project - i18n misses
1941669 - Developer -> ConfigMaps - i18n misses
1941759 - Errored pre-flight checks should not prevent install
1941798 - Some details pages don't have internationalized ResourceKind labels
1941801 - Many filter toolbar dropdowns haven't been internationalized
1941815 - From the web console the terminal can no longer connect after using leaving and returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display properly in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disconnected environment
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable" but bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project does not work with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistent failures of features/project-creation.feature Cypress test in CI
1944631 - openshift authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field becomes "All projects" and is disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component Operators
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: download it link should save pod log in bootstrap.ign was not found
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Cannot assign multiple EgressIPs to a namespace using the automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere- images to vsphere- images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apiserver of cluster operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use oc adm top is shortcut when asking for imagestreams
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command ccoctl aws create-identity-provider with --output-dir parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE] console page shows error when vm is paused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utilization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confusing
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - GlusterFS tests fail on ipv6 clusters
1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology
1953670 - ironic container image build failing because esp partition size is too small
1953680 - ipBlock ignoring all other cidr's apart from the last one specified
1953691 - Remove unused mock
1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console
1953726 - Fix issues related to loading dynamic plugins
1953729 - e2e unidling test is flaking heavily on SNO jobs
1953795 - Ironic can't virtual media attach ISOs sourced from ingress routes
1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS
1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster
1953810 - Allow use of storage policy in VMC environments
1953830 - The oc-compliance build does not available for OCP4.8
1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation
1953977 - [4.8] packageserver pods restart many times on the SNO cluster
1953979 - Ironic caching virtualmedia images results in disk space limitations
1954003 - Alerts shouldn't report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown
1954025 - Disk errors while scaling up a node with multipathing enabled
1954087 - Unit tests for kube-scheduler-operator
1954095 - Apply user defined tags in AWS Internal Registry
1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns
1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22
1954248 - Disable Alertmanager Protractor e2e tests
1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container
1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: "" does not work on a upgraded cluster
1954421 - Get 'Application is not available' when access Prometheus UI
1954459 - Error: Gateway Time-out display on Alerting console
1954460 - UI, The status of "Used Capacity Breakdown [Pods]" is "Not available"
1954509 - FC volume is marked as unmounted after failed reconstruction
1954540 - Lack translation for local language on pages under storage menu
1954544 - authn operator: endpoints controller should use the context it creates
1954554 - Add e2e tests for auto node sizing
1954566 - Cannot update a component (UtilizationCard) error when switching perspectives manually
1954597 - Default image for GCP does not support ignition V3
1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator
1954634 - apirequestcounts does not honor max users
1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0
1954640 - Support of gatherers with different periods
1954671 - disable volume expansion support in vsphere csi driver storage class
1954687 - localvolumediscovery and localvolumset e2es are disabled
1954688 - LSO has missing examples for localvolumesets
1954696 - [API-1009] apirequestcounts should indicate useragent
1954715 - Imagestream imports become very slow when doing many in parallel
1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace
1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954768 - baremetal-operator: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure
1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1954783 - [aws] support byo private hosted zone
1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage
1954830 - verify-client-go job is failing for release-4.7 branch
1954865 - Add necessary priority class to pod-identity-webhook deployment
1954866 - Add necessary priority class to downloads
1954870 - Add necessary priority class to network components
1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack.
1954891 - Add necessary priority class to pruner
1954892 - Add necessary priority class to ingress-canary
1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources
1954937 - [API-1009] oc get apirequestcount shows blank for column REQUESTSINCURRENTHOUR
1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services
1954972 - TechPreviewNoUpgrade featureset can be undone
1954973 - "read /proc/pressure/cpu: operation not supported" in node-exporter logs
1954994 - should update to 2.26.0 for prometheus resources label
1955051 - metrics "kube_node_status_capacity_cpu_cores" does not exist
1955089 - Support [sig-cli] oc observe works as expected test for IPv6
1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display
1955102 - Add vsphere_node_hw_version_total metric to the collected metrics
1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by NM
1955196 - linuxptp-daemon crash on 4.8
1955226 - operator updates apirequestcount CRD over and over
1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing
1955256 - stop collecting API that no longer exists
1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts
1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains "google"
1955414 - 4.8 -> 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator
1955445 - Drop crio image metrics with high cardinality
1955457 - Drop container_memory_failures_total metric because of high cardinality
1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter
1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0
1955478 - Drop high-cardinality metrics from kube-state-metrics which aren't used
1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation
1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range
1955554 - MAO does not react to events triggered from Validating Webhook Configurations
1955589 - thanos-querier should have a PodDisruptionBudget in HA topology
1955595 - Add DevPreviewLongLifecycle Descheduler profile
1955596 - Pods stuck in creation phase on realtime kernel SNO
1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing
1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status ['installing', 'error']
1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta
1955749 - OCP branded templates need to be translated
1955761 - packageserver clusteroperator does not set reason or message for Available condition
1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces
1955803 - OperatorHub - console accepts any value for "Infrastructure features" annotation
1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables
1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable
1955862 - Local Storage Operator using LocalVolume CR fails to create PV's when backend storage failure is simulated
1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct
1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-> Removed-> Managed in configs.imageregistry with high ratio
1955969 - Workers cannot be deployed attached to multiple networks.
1956079 - Installer gather doesn't collect any networking information
1956208 - Installer should validate root volume type
1956220 - Set http proxy system properties as expected by kubernetes-client
1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet
1956334 - Event Listener Details page does not show Triggers section
1956353 - test: analyze job consistently fails
1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate
1956405 - Bump k8s dependencies in cluster resource override admission operator
1956411 - Apply custom tags to AWS EBS volumes
1956480 - [4.8] Bootimage bump tracker
1956606 - probes FlowSchema manifest not included in any cluster profile
1956607 - Multiple manifests lack cluster profile annotations
1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup
1956610 - manage-helm-repos manifest lacks cluster profile annotations
1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string
1956650 - The container disk URL is empty for Windows guest tools
1956768 - aws-ebs-csi-driver-controller-metrics TargetDown
1956826 - buildArgs does not work when the value is taken from a secret
1956895 - Fix chatty kubelet log message
1956898 - fix log files being overwritten on container state loss
1956920 - can't open terminal for pods that have more than one container running
1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployment reporting false
1956978 - Installer gather doesn't include pod names in filename
1957039 - Physical VIP for pod -> Svc -> Host is incorrectly set to an IP of 169.254.169.2 for Local GW
1957041 - Update CI e2echart with more node info
1957127 - Delegated authentication: reduce the number of watch requests
1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the "tests" image
1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes
1957149 - CI: "Managed cluster should start all core operators" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): missing dynamicClient
1957179 - Incorrect VERSION in node_exporter
1957190 - CI jobs failing due too many watch requests (prometheus-operator)
1957198 - Misspelled console-operator condition
1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap
1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2
1957261 - update godoc for new build status image change trigger fields
1957295 - Apply priority classes conventions as test to openshift/origin repo
1957315 - kuryr-controller doesn't indicate being out of quota
1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly
1957374 - mcddrainerr doesn't list specific pod
1957386 - Config serve and validate command should be under alpha
1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions
1957502 - Infrequent panic in kube-apiserver in aws-serial job
1957561 - lack of pseudolocalization for some text on Cluster Setting page
1957584 - Routes are not getting created when using hostname without FQDN standard
1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone
1957645 - Event "Updated PrometheusRule.monitoring.coreos.com/v1 because it changed" is frequently looped with weird empty {} changes
1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP's
1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out
1957748 - Ptp operator pod should have CPU and memory requests set but not limits
1957756 - Device Replacement UI, The status of the disk is "replacement ready" before I clicked on "start replacement"
1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1957775 - CVO creating cloud-controller-manager too early causing upgrade failures
1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error
1957822 - Update apiserver tlsSecurityProfile description to include Custom profile
1957832 - CMO end-to-end tests work only on AWS
1957856 - 'resource name may not be empty' is shown in CI testing
1957869 - baremetal IPI power_interface for irmc is inconsistent
1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects
1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer
1957893 - ClusterDeployment / Agent conditions show "ClusterAlreadyInstalling" during each spoke install
1957895 - Cypress helper projectDropdown.shouldContain is not an assertion
1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator's version reads
1957926 - "Add Capacity" should allow to add n3 (or n4) local devices at once
1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state
1957967 - Possible test flake in listPage Cypress view
1957972 - Leftover templates from mdns
1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7
1957982 - Deployment Actions clickable for view-only projects
1957991 - ClusterOperatorDegraded can fire during installation
1958015 - "config-reloader-cpu" and "config-reloader-memory" flags have been deprecated for prometheus-operator
1958080 - Missing i18n for login, error and selectprovider pages
1958094 - Audit log files are corrupted sometimes
1958097 - don't show "old, insecure token format" if the token does not actually exist
1958114 - Ignore staged vendor files in pre-commit script
1958126 - [OVN]Egressip doesn't take effect
1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs
1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names
1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs
1958285 - Deployment considered unhealthy despite being available and at latest generation
1958296 - OLM must explicitly alert on deprecated APIs in use
1958329 - pick 97428: add more context to log after a request times out
1958367 - Build metrics do not aggregate totals by build strategy
1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton
1958405 - etcd: current health checks and reporting are not adequate to ensure availability
1958406 - Twistlock flags mode of /var/run/crio/crio.sock
1958420 - openshift-install 4.7.10 fails with segmentation error
1958424 - aws: support more auth options in manual mode
1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View
1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse
1958643 - All pods creation stuck due to SR-IOV webhook timeout
1958679 - Compression on pool can't be disabled via UI
1958753 - VMI nic tab is not loadable
1958759 - Pulling Insights report is missing retry logic
1958811 - VM creation fails on API version mismatch
1958812 - Cluster upgrade halts as machine-config-daemon fails to parse rpm-ostree status during cluster upgrades
1958861 - [CCO] pod-identity-webhook certificate request failed
1958868 - ssh copy is missing when vm is running
1958884 - Confusing error message when volume AZ not found
1958913 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs
1958958 - [SCALE] segfault with ovnkube adding to address set
1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes
1959041 - LSO Cluster UI,"Troubleshoot" link does not exist after scale down osd pod
1959058 - ovn-kubernetes has lock contention on the LSP cache
1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change
1959177 - Descheduler dev manifests are missing permissions
1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload
1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates
1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring
1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check
1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system
1959406 - Difficult to debug performance on ovn-k without pprof enabled
1959471 - Kube sysctl conformance tests are disabled, meaning we can't submit conformance results
1959479 - machines doesn't support dual-stack loadbalancers on Azure
1959513 - Cluster-kube-apiserver does not use library-go for audit pkg
1959519 - Operand details page only renders one status donut no matter how many 'podStatuses' descriptors are used
1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console
1959564 - Test verify /run filesystem contents failing
1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot
1959650 - Gather SDI-related MachineConfigs
1959658 - showing a lot "constructing many client instances from the same exec auth config"
1959696 - Deprecate 'ConsoleConfigRoute' struct in console-operator config
1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO
1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode
1959711 - Egressnetworkpolicy doesn't work when configure the EgressIP
1959786 - [dualstack]EgressIP doesn't work on dualstack cluster for IPv6
1959916 - Console not works well against a proxy in front of openshift clusters
1959920 - UEFISecureBoot set not on the right master node
1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []
1960035 - iptables is missing from ose-keepalived-ipfailover image
1960059 - Remove "Grafana UI" link from Console Monitoring > Dashboards page
1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions
1960129 - [e2e][automation] add smoke tests about VM pages and actions
1960134 - some origin images are not public
1960171 - Enable SNO checks for image-registry
1960176 - CCO should recreate a user for the component when it was removed from the cloud providers
1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled
1960255 - fixed obfuscation permissions
1960257 - breaking changes in pr template
1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on shutdown, policy Cluster has significant performance cost
1960323 - Address issues raised by coverity security scan
1960324 - manifests: extra "spec.version" in console quickstarts makes CVO hotloop
1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop
1960339 - manifests: unset "preemptionPolicy" makes CVO hotloop
1960531 - Items under 'Current Bandwidth' for Dashboard 'Kubernetes / Networking / Pod' keep added for every access
1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to understand compared with grafana
1960546 - Add virt_platform metric to the collected metrics
1960554 - Remove rbacv1beta1 handling code
1960612 - Node disk info in overview/details does not account for second drive where /var is located
1960619 - Image registry integration tests use old-style OAuth tokens
1960683 - GlobalConfigPage is constantly requesting resources
1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces
1960716 - Missing details for debugging
1960732 - Outdated manifests directory in CSI driver operator repositories
1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master
1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be "the newest"
1960767 - /metrics endpoint of the Grafana UI is accessible without authentication
1960780 - CI: failed to create PDB "service-test" the server could not find the requested resource
1961064 - Documentation link to network policies is outdated
1961067 - Improve log gathering logic
1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs
1961091 - Gather MachineHealthCheck definitions
1961120 - CSI driver operators fail when upgrading a cluster
1961173 - recreate existing static pod manifests instead of updating
1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing
1961314 - Race condition in operator-registry pull retry unit tests
1961320 - CatalogSource does not emit any metrics to indicate if it's ready or not
1961336 - Devfile sample for BuildConfig is not defined
1961356 - Update single quotes to double quotes in string
1961363 - Minor string update for " No Storage classes found in cluster, adding source is disabled."
1961393 - DetailsPage does not work with group~version~kind
1961452 - Remove "Alertmanager UI" link from Console Monitoring > Alerting page
1961466 - Some dropdown placeholder text on route creation page is not translated
1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true
1961506 - NodePorts do not work on RHEL 7.9 workers (was "4.7 -> 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers")
1961536 - clusterdeployment without pull secret is crashing assisted service pod
1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop
1961545 - Fixing Documentation Generation
1961550 - HAproxy pod logs showing error "another server named 'pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080' was already defined at line 326, please use distinct names"
1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig
1961561 - The encryption controllers send lots of request to an API server
1961582 - Build failure on s390x
1961644 - NodeAuthenticator tests are failing in IPv6
1961656 - driver-toolkit missing some release metadata
1961675 - Kebab menu of taskrun contains Edit options which should not be present
1961701 - Enhance gathering of events
1961717 - Update runtime dependencies to Wallaby builds for bugfixes
1961829 - Quick starts prereqs not shown when description is long
1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy
1961878 - Add Sprint 199 translations
1961897 - Remove history listener before console UI is unmounted
1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes
1962062 - Monitoring dashboards should support default values of "All"
1962074 - SNO:the pod get stuck in CreateContainerError and prompt "failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable" after adding a performanceprofile
1962095 - Replace gather-job image without FQDN
1962153 - VolumeSnapshot routes are ambiguous, too generic
1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime
1962219 - NTO relies on unreliable leader-for-life implementation.
1962256 - use RHEL8 as the vm-example
1962261 - Monitoring components requesting more memory than they use
1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster
1962347 - Cluster does not exist logs after successful installation
1962392 - After upgrade from 4.5.16 to 4.6.17, customer's application is seeing re-transmits
1962415 - duplicate zone information for in-tree PV after enabling migration
1962429 - Cannot create windows vm because kubemacpool.io denied the request
1962525 - [Migration] SDN migration stuck on MCO on RHV cluster
1962569 - NetworkPolicy details page should also show Egress rules
1962592 - Worker nodes restarting during OS installation
1962602 - Cloud credential operator scrolls info "unable to provide upcoming..." on unsupported platform
1962630 - NTO: Ship the current upstream TuneD
1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root
1962698 - Console-operator can not create resource console-public configmap in the openshift-config-managed namespace
1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint
1962740 - Add documentation to Egress Router
1962850 - [4.8] Bootimage bump tracker
1962882 - Version pod does not set priorityClassName
1962905 - Ramdisk ISO source defaulting to "http" breaks deployment on a good amount of BMCs
1963068 - ironic container should not specify the entrypoint
1963079 - KCM/KS: ability to enforce localhost communication with the API server.
1963154 - Current BMAC reconcile flow skips Ironic's deprovision step
1963159 - Add Sprint 200 translations
1963204 - Update to 8.4 IPA images
1963205 - Installer is using old redirector
1963208 - Translation typos/inconsistencies for Sprint 200 files
1963209 - Some strings in public.json have errors
1963211 - Fix grammar issue in kubevirt-plugin.json string
1963213 - Memsource download script running into API error
1963219 - ImageStreamTags not internationalized
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table
1963502 - create template from is not descriptive
1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too
1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault
1963848 - Use OS-shipped stalld vs. the NTO-shipped one.
1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies
1963871 - cluster-etcd-operator:[build] upgrade to go 1.16
1963896 - The VM disks table does not show easy links to PVCs
1963912 - "[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}" failures on vsphere
1963932 - Installation failures in bootstrap in OpenStack release jobs
1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail
1964059 - rebase openshift/sdn to kube 1.21.1
1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration
1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to "Unknown provider baremetal"
1964243 - The `oc compliance fetch-raw` doesn't work for disconnected cluster
1964270 - Failed to install 'cluster-kube-descheduler-operator' with error: "clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\": must be no more than 63 characters"
1964319 - Network policy "deny all" interpreted as "allow all" in description page
1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured
1964472 - Make project and namespace requirements more visible rather than giving me an error after submission
1964486 - Bulk adding of CIDR IPS to whitelist is not working
1964492 - Pick 102171: Implement support for watch initialization in P&F
1964625 - NETID duplicate check is only required in NetworkPolicy Mode
1964748 - Sync upstream 1.7.2 downstream
1964756 - PVC status is always in 'Bound' status when it is actually cloning
1964847 - Sanity check test suite missing from the repo
1964888 - openshift-apiserver imagestreamimports depend on >34s timeout support, WAS: transport: loopyWriter.run returning. connection error: desc = "transport is closing"
1964936 - error log for "oc adm catalog mirror" is not correct
1964979 - Add mapping from ACI to infraenv to handle creation order issues
1964997 - Helm Library charts are showing and can be installed from Catalog
1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots
1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation
1965283 - 4.7->4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:
1965330 - oc image extract fails due to security capabilities on files
1965334 - opm index add fails during image extraction
1965367 - Typo in etcd-metric-serving-ca resource name
1965370 - "Route" is not translated in Korean or Chinese
1965391 - When storage class is already present wizard does not jump to "Storage and nodes"
1965422 - runc is missing Provides oci-runtime in rpm spec
1965522 - [v2v] Multiple typos on VM Import screen
1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists
1965909 - Replace "Enable Taint Nodes" by "Mark nodes as dedicated"
1965921 - [oVirt] High performance VMs shouldn't be created with Existing policy
1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request
1966077 - `hidden` descriptor is visible in the Operator instance details page
1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11
1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality
1966138 - (release-4.8) Update K8s & OpenShift API versions
1966156 - Issue with Internal Registry CA on the service pod
1966174 - No storage class is installed, OCS and CNV installations fail
1966268 - Workaround for Network Manager not supporting nmconnections priority
1966401 - Revamp Ceph Table in Install Wizard flow
1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert
1966416 - (release-4.8) Do not exceed the data size limit
1966459 - 'policy/v1beta1 PodDisruptionBudget' and 'batch/v1beta1 CronJob' appear in image-registry-operator log
1966487 - IP address in Pods list table are showing node IP other than pod IP
1966520 - Add button from ocs add capacity should not be enabled if there are no PV's
1966523 - (release-4.8) Gather MachineAutoScaler definitions
1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed
1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug
1966602 - don't require manually setting IPv6DualStack feature gate in 4.8
1966620 - The bundle.Dockerfile in the repo is obsolete
1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1966654 - Alertmanager PDB is not created, but Prometheus UWM is
1966672 - Add Sprint 201 translations
1966675 - Admin console string updates
1966677 - Change comma to semicolon
1966683 - Translation bugs from Sprint 201 files
1966684 - Verify "Creating snapshot for claim <1>{pvcName}</1>" displays correctly
1966697 - Garbage collector logs every interval - move to debug level
1966717 - include full timestamps in the logs
1966759 - Enable downstream plugin for Operator SDK
1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version
1966813 - "Replacing an unhealthy etcd member whose node is not ready" procedure results in new etcd pod in CrashLoopBackOff
1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1
1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube"
1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings "ipv6.dhcp-duid=ll" missing from dual stack install
1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image
1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored
1967197 - 404 errors loading some i18n namespaces
1967207 - Getting started card: console customization resources link shows other resources
1967208 - Getting started card should use semver library for parsing the version instead of string manipulation
1967234 - Console is continuously polling for ConsoleLink acm-link
1967275 - Awkward wrapping in getting started dashboard card
1967276 - Help menu tooltip overlays dropdown
1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check
1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit
1967423 - [master] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion
1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests
1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small
1967578 - [4.8.0] clusterDeployments controller should take 1m to requeue when failing with AddOpenshiftVersion
1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit
1967595 - Fixes the remaining lint issues
1967614 - prometheus-k8s pods can't be scheduled due to volume node affinity conflict
1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn't work if ovirt-config.yaml doesn't exist and user should fill the FQDN URL
1967625 - Add OpenShift Dockerfile for cloud-provider-aws
1967631 - [4.8.0] Cluster install failed due to timeout while "Waiting for control plane"
1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into "Writing image to disk" from "Waiting for bootkube"
1967639 - Console whitescreens if user preferences fail to load
1967662 - machine-api-operator should not use deprecated "platform" field in infrastructures.config.openshift.io
1967667 - Add Sprint 202 Round 1 translations
1967713 - Insights widget shows invalid link to the OCM
1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming
1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than "NoExecute"
1967803 - should update to 7.5.5 for grafana resources version label
1967832 - Add more tests for periodic.go
1967833 - Add tasks pool to tasks_processing
1967842 - Production logs are spammed on "OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS"
1967843 - Fix null reference to messagesToSearch in gather_logs.go
1967902 - [4.8.0] Assisted installer chrony manifests missing index numbering
1967933 - Network-Tools debug scripts not working as expected
1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: "mkdir: cannot create directory '/var/lib/pgsql/data/userdata': Permission denied"
1968019 - drain timeout and pool degrading period is too short
1968067 - [master] Agent validation not including reason for being insufficient
1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed
1968175 - [4.8.0] Agent validation not including reason for being insufficient
1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration
1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn't be required
1968435 - [4.8.0] Unclear message in case of missing clusterImageSet
1968436 - Listeners timeout updated to remain using default value
1968449 - [4.8.0] Wrong Install-config override documentation
1968451 - [4.8.0] Garbage collector not cleaning up directories of removed clusters
1968452 - [4.8.0] [doc] "Mirror Registry Configuration" doc section needs clarification of functionality and limitations
1968454 - [4.8.0] backend events generated with wrong namespace for agent
1968455 - [4.8.0] Assisted Service operator's controllers are starting before the base service is ready
1968515 - oc should set user-agent when talking with registry
1968531 - Sync upstream 1.8.0 downstream
1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn't clean up properly
1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted
1968625 - Pods using sr-iov interfaces failing to start for Failed to create pod sandbox
1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil
1968701 - Bare metal IPI installation is failed due to worker inspection failure
1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed
1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning
1969284 - Console Query Browser: Can't reset zoom to fixed time range after dragging to zoom
1969315 - [4.8.0] BMAC doesn't check if ISO Url changed before queuing BMH for reconcile
1969352 - [4.8.0] Creating BareMetalHost without the "inspect.metal3.io" does not automatically add it
1969363 - [4.8.0] Infra env should show the time that ISO was generated.
1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it
1969386 - Filesystem's Utilization doesn't show in VM overview tab
1969397 - OVN bug causing subports to stay DOWN fails installations
1969470 - [4.8.0] Misleading error in case of install-config override bad input
1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step
1969525 - Replace golint with revive
1969535 - Topology edit icon does not link correctly when branch name contains slash
1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it
1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long
1969561 - Test "an end user can use OLM can subscribe to the operator" generates deprecation alert
1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire
1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io
1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1
1969626 - Port-forward stream cleanup can cause kubelet to panic
1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out
1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check
1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso
1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups
1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml
1969784 - WebTerminal widget should send resize events
1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails
1969891 - Fix rotated pipelinerun status icon issue in safari
1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse
1969903 - Provisioning a large number of hosts results in an unexpected delay in hosts becoming available
1969951 - Cluster local doesn't work for knative services created from dev console
1969969 - ironic-rhcos-downloader container uses and old base image
1970062 - ccoctl does not work with STS authentication
1970068 - ovnkube-master logs "Failed to find node ips for gateway" error
1970126 - [4.8.0] Disable "metrics-events" when deploying using the operator
1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change
1970262 - [4.8.0] Remove Agent CRD Status fields not needed
1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs
1970269 - [4.8.0] missing role in agent CRD
1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstall CRDs
1970381 - Monitoring dashboards: Custom time range inputs should retain their values
1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed
1970401 - [4.8.0] AgentLabelSelector is required yet not supported
1970415 - SR-IOV Docs needs documentation for disabling port security on a network
1970470 - Add pipeline annotation to Secrets which are created for a private repo
1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod
1970624 - 4.7->4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io
1970828 - "500 Internal Error" for all openshift-monitoring routes
1970975 - 4.7 -> 4.8 upgrades on AWS take longer than expected
1971068 - Removing invalid AWS instances from the CF templates
1971080 - 4.7->4.8 CI: KubePodNotReady due to MCD's 5m sleep between drain attempts
1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !
1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces
1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing "Validated" condition about VIP not matching machine network
1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn't work - clusteroperator/kube-apiserver is not upgradeable
1971589 - [4.8.0] Telemetry-client won't report metrics in case the cluster was installed using the assisted operator
1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service
1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery
1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)
1971739 - Keep /boot RW when kdump is enabled
1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly
1972128 - ironic-static-ip-manager container still uses 4.7 base image
1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are
1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster
1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted
1972262 - [4.8.0] "baremetalhost.metal3.io/detached" uses boolean value where string is expected
1972426 - Adopt failure can trigger deprovisioning
1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage
1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration
1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall
1972565 - performance issues due to lost node, pods taking too long to relaunch
1972662 - DPDK KNI modules need some additional tools
1972676 - Requirements for authenticating kernel modules with X.509
1972687 - Using bound SA tokens causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings
1972690 - [4.8.0] infra-env condition message isn't informative in case of missing pull secret
1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration
1972768 - kube-apiserver setup fail while installing SNO due to port being used
1972864 - New `local-with-fallback` service annotation does not preserve source IP
1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8
1973117 - No storage class is installed, OCS and CNV installations fail
1973233 - remove kubevirt images and references
1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld.
1973428 - Placeholder bug for OCP 4.8.0 image release
1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped
1973672 - fix ovn-kubernetes NetworkPolicy 4.7->4.8 upgrade issue
1973995 - [Feature:IPv6DualStack] tests are failing in dualstack
1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings
1974447 - Requirements for nvidia GPU driver container for driver toolkit
1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events.
1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel
1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion
1974746 - [4.8.0] File system usage not being logged appropriately
1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay.
1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster
1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string
1974850 - [4.8] coreos-installer failing Execshield
1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift
1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing
1975155 - Kubernetes service IP cannot be accessed for rhel worker
1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types
1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData
1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified
1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve
1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn
1975672 - [4.8.0] Production logs are spammed on "Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient"
1975789 - worker nodes rebooted when we simulate a case where the api-server is down
1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]
1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn't work - ingresscontroller "default" is degraded
1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted
1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]
1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts
1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO
1977233 - [4.8] Unable to authenticate against IDP after upgrade to 4.8-rc.1
1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO
1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller
1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes
1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses
1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8
1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod
1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used
1980788 - NTO-shipped stalld can segfault
1981633 - enhance service-ca injection
1982250 - Performance Addon Operator fails to install after catalog source becomes ready
1982252 - olm Operator is in CrashLoopBackOff state with error "couldn't cleanup cross-namespace ownerreferences"
- References:
https://access.redhat.com/security/cve/CVE-2016-2183 https://access.redhat.com/security/cve/CVE-2020-7774 https://access.redhat.com/security/cve/CVE-2020-15106 https://access.redhat.com/security/cve/CVE-2020-15112 https://access.redhat.com/security/cve/CVE-2020-15113 https://access.redhat.com/security/cve/CVE-2020-15114 https://access.redhat.com/security/cve/CVE-2020-15136 https://access.redhat.com/security/cve/CVE-2020-26160 https://access.redhat.com/security/cve/CVE-2020-26541 https://access.redhat.com/security/cve/CVE-2020-28469 https://access.redhat.com/security/cve/CVE-2020-28500 https://access.redhat.com/security/cve/CVE-2020-28852 https://access.redhat.com/security/cve/CVE-2021-3114 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3636 https://access.redhat.com/security/cve/CVE-2021-20206 https://access.redhat.com/security/cve/CVE-2021-20271 https://access.redhat.com/security/cve/CVE-2021-20291 https://access.redhat.com/security/cve/CVE-2021-21419 https://access.redhat.com/security/cve/CVE-2021-21623 https://access.redhat.com/security/cve/CVE-2021-21639 https://access.redhat.com/security/cve/CVE-2021-21640 https://access.redhat.com/security/cve/CVE-2021-21648 https://access.redhat.com/security/cve/CVE-2021-22133 https://access.redhat.com/security/cve/CVE-2021-23337 https://access.redhat.com/security/cve/CVE-2021-23362 https://access.redhat.com/security/cve/CVE-2021-23368 https://access.redhat.com/security/cve/CVE-2021-23382 https://access.redhat.com/security/cve/CVE-2021-25735 https://access.redhat.com/security/cve/CVE-2021-25737 https://access.redhat.com/security/cve/CVE-2021-26539 
https://access.redhat.com/security/cve/CVE-2021-26540 https://access.redhat.com/security/cve/CVE-2021-27292 https://access.redhat.com/security/cve/CVE-2021-28092 https://access.redhat.com/security/cve/CVE-2021-29059 https://access.redhat.com/security/cve/CVE-2021-29622 https://access.redhat.com/security/cve/CVE-2021-32399 https://access.redhat.com/security/cve/CVE-2021-33034 https://access.redhat.com/security/cve/CVE-2021-33194 https://access.redhat.com/security/cve/CVE-2021-33909 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce

Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)
Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension 1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag 1913444 - RFE Make the source code for the endpoint-metrics-operator public 1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull 1927520 - RHACM 2.3.0 images 1928937 - CVE-2021-23337 nodejs-lodash: command injection via template 1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions 1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection 1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate 1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms 1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization 1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string 1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application 1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header 1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call 1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS 1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service 1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service 1942589 - cluster became offline after apiserver health check 1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() 1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character 1944827 - CVE-2021-28918 
nodejs-netmask: improper input validation of octal input data 1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service 1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing 1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js 1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service 1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option 1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe 1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command 1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets 1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs 1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method 1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions 1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id 1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
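Several of the bugs fixed above (1928954, 1948763, 1954150, 1956818, among others) are regular-expression denial-of-service (ReDoS) issues of the same shape: a quantified character class adjacent to an anchor that forces quadratic backtracking on attacker-shaped input. A minimal sketch in plain JavaScript, using a representative trim-style regex rather than lodash's actual source:

```javascript
// Illustration of the ReDoS class behind CVE-2020-28500 (nodejs-lodash:
// toNumber, trim, trimEnd). The regex below is representative of the
// vulnerable pattern, not lodash's exact source. For an input where a long
// whitespace run sits in the MIDDLE of the string, the engine retries
// \s+$ from every position inside the run and fails at the trailing
// non-space each time, giving roughly quadratic runtime.
const reTrim = /^\s+|\s+$/g;

function naiveTrim(s) {
  return s.replace(reTrim, "");
}

// Time one naiveTrim call over a hostile payload of size n, in ms.
function timeIt(n) {
  const payload = "x" + " ".repeat(n) + "x"; // interior whitespace run
  const t0 = process.hrtime.bigint();
  naiveTrim(payload);
  return Number(process.hrtime.bigint() - t0) / 1e6;
}

// Doubling n roughly quadruples the runtime for this input shape.
console.log(timeIt(10000).toFixed(1), "ms;", timeIt(20000).toFixed(1), "ms");
```

The lodash 4.17.21 fix replaces the regex with a manual character scan, which is linear by construction.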
- VDSM manages and monitors the host's storage, memory and networks as well as virtual machine creation, other host administration tasks, statistics gathering, and log collection.
Bug Fix(es):
- An update in libvirt has changed the way block threshold events are submitted. As a result, the VDSM was confused by the libvirt event and tried to look up a drive, logging a warning about a missing drive. In this release, the VDSM has been adapted to handle the new libvirt behavior and does not log warnings about missing drives. (BZ#1948177)
- Previously, when a virtual machine was powered off on the source host of a live migration and the migration finished successfully at the same time, the two events interfered with each other and sometimes prevented migration cleanup, resulting in additional migrations from the host being blocked. In this release, additional migrations are not blocked. (BZ#1959436)
- Previously, when a snapshot failed to execute and was re-executed later, the second try would fail because it reused data from the previous execution. In this release, this data is used only when needed, in recovery mode. (BZ#1984209)
- Then engine deletes the volume and causes data corruption. 1998017 - Keep cinderlib dependencies optional for 4.4.8
Bug Fix(es):
- Documentation is referencing deprecated API for Service Export - Submariner (BZ#1936528)
- Importing of cluster fails due to error/typo in generated command (BZ#1936642)
- RHACM 2.2.2 images (BZ#1938215)
- 2.2 clusterlifecycle fails to allow provision "fips: true" clusters on aws, vsphere (BZ#1941778)
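The "fips: true" setting in the bug title above is an install-time option rather than something toggled after deployment. A minimal sketch of where it sits in an OpenShift install-config, assuming the standard top-level install-config fields (cluster name and region here are placeholders):

```yaml
# Fragment of install-config.yaml. "fips" is a top-level field; when true,
# the installer provisions nodes with FIPS-validated crypto modules enabled.
apiVersion: v1
metadata:
  name: example-cluster   # hypothetical cluster name
platform:
  aws:
    region: us-east-1     # hypothetical region
fips: true                # the setting referenced in BZ#1941778
pullSecret: '...'         # elided
```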
Summary:
The Migration Toolkit for Containers (MTC) 1.7.4 is now available.

Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API
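When driven through the Kubernetes API, the migration described above is expressed as custom resources. A minimal sketch of a migration plan follows; the API group and every field name here are assumptions drawn from MTC conventions, not taken from this advisory:

```yaml
# Hypothetical MigPlan sketch (field names are assumptions): pairs a source
# and destination cluster and selects the namespaces whose resources,
# persistent volume data, and images should be migrated.
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan          # hypothetical name
  namespace: openshift-migration
spec:
  srcMigClusterRef:
    name: source-cluster         # hypothetical source MigCluster
    namespace: openshift-migration
  destMigClusterRef:
    name: host                   # the cluster running MTC
    namespace: openshift-migration
  migStorageRef:
    name: example-storage        # hypothetical replication repository
    namespace: openshift-migration
  namespaces:
    - my-app                     # namespaces to migrate
```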
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202102-1492", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "primavera unifier", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.7" }, { "model": "financial services crime and compliance management studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.8.3.0" }, { "model": "jd 
edwards enterpriseone tools", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.1" }, { "model": "health sciences data management workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.0.0.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "19.12.11" }, { "model": "banking trade finance process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "17.12.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "20.12.0" }, { "model": "primavera unifier", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "banking trade finance process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "communications cloud native core policy", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "1.11.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "20.12" }, { "model": "lodash", "scope": "lt", "trust": 1.0, "vendor": "lodash", "version": "4.17.21" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "20.12.7" }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "18.8.0" }, { "model": "banking trade finance process management", "scope": "eq", 
"trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.12" }, { "model": "health sciences data management workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "2.5.2.1" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.2.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.3.0" }, { "model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "primavera gateway", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "19.12.0" }, { "model": "banking supply chain finance", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.5.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "18.8.12" }, { "model": "banking corporate lending process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "financial services crime and compliance management studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.0.8.2.0" }, { "model": "retail customer management and segmentation foundation", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "19.0" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.0" }, { 
"model": "banking extensibility workbench", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "primavera unifier", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "18.8" }, { "model": "communications session border controller", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.4" }, { "model": "enterprise communications broker", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "3.3.0" }, { "model": "primavera gateway", "scope": "lte", "trust": 1.0, "vendor": "oracle", "version": "17.12.11" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "banking credit facilities process management", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "14.2.0" }, { "model": "communications design studio", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.4.2" }, { "model": "communications services gatekeeper", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "7.0" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" }, { "model": "lodash", "scope": "eq", "trust": 0.8, "vendor": "lodash", "version": "4.17.21" }, { "model": "lodash", "scope": "eq", "trust": 0.8, "vendor": "lodash", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "NVD", "id": "CVE-2020-28500" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:lodash:lodash:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "4.17.21", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:oracle:primavera_unifier:18.8:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12", "versionStartIncluding": "17.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:19.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:retail_customer_management_and_segmentation_foundation:19.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_services_gatekeeper:7.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_unifier:20.12:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "17.12.11", "versionStartIncluding": "17.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_session_border_controller:8.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:communications_session_border_controller:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "20.12.7", "versionStartIncluding": "20.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "19.12.11", "versionStartIncluding": "19.12.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:primavera_gateway:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "18.8.12", "versionStartIncluding": "18.8.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_credit_facilities_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_corporate_lending_process_management:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_supply_chain_finance:14.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_trade_finance_process_management:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_design_studio:7.4.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:banking_extensibility_workbench:14.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:banking_extensibility_workbench:14.5.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:enterprise_communications_broker:3.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.2.6.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:2.5.2.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_data_management_workbench:3.0.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.3.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:financial_services_crime_and_compliance_management_studio:8.0.8.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-28500" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, 
{ "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ], "trust": 1.3 }, "cve": "CVE-2020-28500", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 5.0, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-28500", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:P", 
"version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "VHN-373964", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:L/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "LOW", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "Low", "baseScore": 5.3, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-28500", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-28500", "trust": 1.8, "value": "MEDIUM" }, { "author": "report@snyk.io", "id": "CVE-2020-28500", "trust": 1.0, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202102-1168", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-373964", "trust": 0.1, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2020-28500", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-373964" }, { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "CNNVD", "id": 
"CNNVD-202102-1168" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Lodash Exists in unspecified vulnerabilities.Service operation interruption (DoS) It may be in a state. lodash is an open source JavaScript utility library. There is a security vulnerability in Lodash. Please keep an eye on CNNVD or manufacturer announcements. Description:\n\nThe ovirt-engine package provides the manager for virtualization\nenvironments. \nThis manager enables admins to define hosts and networks, as well as to add\nstorage, create VMs and manage user permissions. \n\nBug Fix(es):\n\n* This release adds the queue attribute to the virtio-scsi driver in the\nvirtual machine configuration. This improvement enables multi-queue\nperformance with the virtio-scsi driver. (BZ#911394)\n\n* With this release, source-load-balancing has been added as a new\nsub-option for xmit_hash_policy. It can be configured for bond modes\nbalance-xor (2), 802.3ad (4) and balance-tlb (5), by specifying\nxmit_hash_policy=vlan+srcmac. (BZ#1683987)\n\n* The default DataCenter/Cluster will be set to compatibility level 4.6 on\nnew installations of Red Hat Virtualization 4.4.6.; (BZ#1950348)\n\n* With this release, support has been added for copying disks between\nregular Storage Domains and Managed Block Storage Domains. \nIt is now possible to migrate disks between Managed Block Storage Domains\nand regular Storage Domains. (BZ#1906074)\n\n* Previously, the engine-config value LiveSnapshotPerformFreezeInEngine was\nset by default to false and was supposed to be uses in cluster\ncompatibility levels below 4.4. The value was set to general version. 
With this release, each cluster level has its own value, defaulting to
false for 4.4 and above. This will reduce unnecessary overhead in removing
timeouts of the file system freeze command. (BZ#1932284)

* With this release, running virtual machines is supported for up to 16TB
of RAM on x86_64 architectures. (BZ#1944723)

* This release adds the gathering of oVirt/RHV related certificates to
allow easier debugging of issues for faster customer help and issue
resolution.
Information from certificates is now included as part of the sosreport.
Note that no corresponding private key information is gathered, due to
security considerations. (BZ#1845877)

4. Solution:

For details on how to apply this update, which includes the changes
described in this advisory, refer to:

https://access.redhat.com/articles/2974891

5. Bugs fixed (https://bugzilla.redhat.com/):

1113630 - [RFE] indicate vNICs that are out-of-sync from their configuration on engine
1310330 - [RFE] Provide a way to remove stale LUNs from hypervisors
1589763 - [downstream clone] Error changing CD for a running VM when ISO image is on a block domain
1621421 - [RFE] indicate vNIC is out of sync on network QoS modification on engine
1717411 - improve engine logging when migration fail
1766414 - [downstream] [UI] hint after updating mtu on networks connected to running VMs
1775145 - Incorrect message from hot-plugging memory
1821199 - HP VM fails to migrate between identical hosts (the same cpu flags) not supporting TSC.
1845877 - [RFE] Collect information about RHV PKI
1875363 - engine-setup failing on FIPS enabled rhel8 machine
1906074 - [RFE] Support disks copy between regular and managed block storage domains
1910858 - vm_ovf_generations is not cleared while detaching the storage domain causing VM import with old stale configuration
1917718 - [RFE] Collect memory usage from guests without ovirt-guest-agent and memory ballooning
1919195 - Unable to create snapshot without saving memory of running VM from VM Portal.
1919984 - engine-setup failse to deploy the grafana service in an external DWH server
1924610 - VM Portal shows N/A as the VM IP address even if the guest agent is running and the IP is shown in the webadmin portal
1926018 - Failed to run VM after FIPS mode is enabled
1926823 - Integrating ELK with RHV-4.4 fails as RHVH is missing 'rsyslog-gnutls' package.
1928158 - Rename 'CA Certificate' link in welcome page to 'Engine CA certificate'
1928188 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1929211 - Failed to parse 'writeOps' value 'XXXX' to integer: For input string: "XXXX"
1930522 - [RHV-4.4.5.5] Failed to deploy RHEL AV 8.4.0 host to RHV with error "missing groups or modules: virt:8.4"
1930565 - Host upgrade failed in imgbased but RHVM shows upgrade successful
1930895 - RHEL 8 virtual machine with qemu-guest-agent installed displays Guest OS Memory Free/Cached/Buffered: Not Configured
1932284 - Engine handled FS freeze is not fast enough for Windows systems
1935073 - Ansible ovirt_disk module can create disks with conflicting IDs that cannot be removed
1942083 - upgrade ovirt-cockpit-sso to 0.1.4-2
1943267 - Snapshot creation is failing for VM having vGPU.
1944723 - [RFE] Support virtual machines with 16TB memory
1948577 - [welcome page] remove "Infrastructure Migration" section (obsoleted)
1949543 - rhv-log-collector-analyzer fails to run MAC Pools rule
1949547 - rhv-log-collector-analyzer report contains 'b characters
1950348 - Set compatibility level 4.6 for Default DataCenter/Cluster during new installations of RHV 4.4.6
1950466 - Host installation failed
1954401 - HP VMs pinning is wiped after edit->ok and pinned to first physical CPUs. Our key and
details on how to verify the signature are available from
https://access.redhat.com/security/team/key/

7. -----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

=====================================================================
 Red Hat Security Advisory

Synopsis: Moderate: OpenShift Container Platform 4.8.2 bug fix and security update
Advisory ID: RHSA-2021:2438-01
Product: Red Hat OpenShift Enterprise
Advisory URL: https://access.redhat.com/errata/RHSA-2021:2438
Issue date: 2021-07-27
CVE Names: CVE-2016-2183 CVE-2020-7774 CVE-2020-15106
           CVE-2020-15112 CVE-2020-15113 CVE-2020-15114
           CVE-2020-15136 CVE-2020-26160 CVE-2020-26541
           CVE-2020-28469 CVE-2020-28500 CVE-2020-28852
           CVE-2021-3114 CVE-2021-3121 CVE-2021-3516
           CVE-2021-3517 CVE-2021-3518 CVE-2021-3520
           CVE-2021-3537 CVE-2021-3541 CVE-2021-3636
           CVE-2021-20206 CVE-2021-20271 CVE-2021-20291
           CVE-2021-21419 CVE-2021-21623 CVE-2021-21639
           CVE-2021-21640 CVE-2021-21648 CVE-2021-22133
           CVE-2021-23337 CVE-2021-23362 CVE-2021-23368
           CVE-2021-23382 CVE-2021-25735 CVE-2021-25737
           CVE-2021-26539 CVE-2021-26540 CVE-2021-27292
           CVE-2021-28092 CVE-2021-29059 CVE-2021-29622
           CVE-2021-32399 CVE-2021-33034 CVE-2021-33194
           CVE-2021-33909
=====================================================================

1. Summary:

Red Hat OpenShift Container Platform release 4.8.2 is now available with
updates to packages and images that fix several bugs and add enhancements.

This release includes a security update for Red Hat OpenShift Container
Platform 4.8.

Red Hat Product Security has rated this update as having a security impact
of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which
gives a detailed severity rating, is available for each vulnerability from
the CVE link(s) in the References section.

2. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing
Kubernetes application platform solution designed for on-premise or private
cloud deployments.

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

This advisory contains the container images for Red Hat OpenShift Container
Platform 4.8.2. See the following advisory for the RPM packages for this
release:

https://access.redhat.com/errata/RHSA-2021:2437

Space precludes documenting all of the container images in this advisory.
See the following Release Notes documentation, which will be updated
shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

* SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)
(CVE-2016-2183)

* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index
validation (CVE-2021-3121)

* nodejs-y18n: prototype pollution vulnerability (CVE-2020-7774)

* etcd: Large slice causes panic in decodeRecord method (CVE-2020-15106)

* etcd: DoS in wal/wal.go (CVE-2020-15112)

* etcd: directories created via os.MkdirAll are not checked for permissions
(CVE-2020-15113)

* etcd: gateway can include itself as an endpoint resulting in resource
exhaustion and leads to DoS (CVE-2020-15114)

* etcd: no authentication is performed against endpoints provided in the
--endpoints flag (CVE-2020-15136)

* jwt-go: access restriction bypass vulnerability (CVE-2020-26160)

* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)

* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
(CVE-2020-28500)

* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing
bcp47 tag (CVE-2020-28852)

* golang: crypto/elliptic: incorrect operations on the P-224 curve
(CVE-2021-3114)

* containernetworking-cni: Arbitrary path injection via type field in CNI
configuration (CVE-2021-20206)

* containers/storage: DoS via malicious image (CVE-2021-20291)

* prometheus: open redirect under the /new endpoint (CVE-2021-29622)

* golang: x/net/html: infinite loop in ParseFragment (CVE-2021-33194)

* go.elastic.co/apm: leaks sensitive HTTP headers during panic
(CVE-2021-22133)

Space precludes listing in detail the following additional CVE fixes:
(CVE-2021-27292), (CVE-2021-28092), (CVE-2021-29059), (CVE-2021-23382),
(CVE-2021-26539), (CVE-2021-26540), (CVE-2021-23337), (CVE-2021-23362) and
(CVE-2021-23368)

For more details about the security issue(s), including the impact, a CVSS
score, acknowledgments, and other related information, refer to the CVE
page(s) listed in the References section.

Additional Changes:

You may download the oc tool and use it to inspect release image metadata
as follows:

(For x86_64 architecture)

 $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64

The image digest is
sha256:0e82d17ababc79b10c10c5186920232810aeccbccf2a74c691487090a2c98ebc

(For s390x architecture)

 $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-s390x

The image digest is
sha256:a284c5c3fa21b06a6a65d82be1dc7e58f378aa280acd38742fb167a26b91ecb5

(For ppc64le architecture)

 $ oc adm release info quay.io/openshift-release-dev/ocp-release:4.8.2-ppc64le

The image digest is
sha256:da989b8e28bccadbb535c2b9b7d3597146d14d254895cd35f544774f374cdd0f

All OpenShift Container Platform 4.8 users are advised to upgrade to these
updated packages and images when they are available in the appropriate
release channel. To check for available updates, use the OpenShift Console
or the CLI oc command. Instructions for upgrading a cluster are available at
https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor

3. Solution:

For OpenShift Container Platform 4.8 see the following documentation, which
will be updated shortly for this release, for important instructions on how
to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Details on how to access this content are available at
https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

4. Bugs fixed (https://bugzilla.redhat.com/):

1369383 - CVE-2016-2183 SSL/TLS: Birthday attack against 64-bit block ciphers (SWEET32)
1725981 - oc explain does not work well with full resource.group names
1747270 - [osp] Machine with name "<cluster-id>-worker"couldn't join the cluster
1772993 - rbd block devices attached to a host are visible in unprivileged container pods
1786273 - [4.6] KAS pod logs show "error building openapi models ... has invalid property: anyOf" for CRDs
1786314 - [IPI][OSP] Install fails on OpenStack with self-signed certs unless the node running the installer has the CA cert in its system trusts
1801407 - Router in v4v6 mode puts brackets around IPv4 addresses in the Forwarded header
1812212 - ArgoCD example application cannot be downloaded from github
1817954 - [ovirt] Workers nodes are not numbered sequentially
1824911 - PersistentVolume yaml editor is read-only with system:persistent-volume-provisioner ClusterRole
1825219 - openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1825417 - The containerruntimecontroller doesn't roll back to CR-1 if we delete CR-2
1834551 - ClusterOperatorDown fires when operator is only degraded; states will block upgrades
1835264 - Intree provisioner doesn't respect PVC.spec.dataSource sometimes
1839101 - Some sidebar links in developer perspective don't follow same project
1840881 - The KubeletConfigController cannot process multiple confs for a pool/ pool changes
1846875 - Network setup test high failure rate
1848151 - Console continues to poll the ClusterVersion resource when the user doesn't have authority
1850060 - After upgrading to 3.11.219 timeouts are appearing.
1852637 - Kubelet sets incorrect image names in node status images section
1852743 - Node list CPU column only show usage
1853467 - container_fs_writes_total is inconsistent with CPU/memory in summarizing cgroup values
1857008 - [Edge] [BareMetal] Not provided STATE value for machines
1857477 - Bad helptext for storagecluster creation
1859382 - check-endpoints panics on graceful shutdown
1862084 - Inconsistency of time formats in the OpenShift web-console
1864116 - Cloud credential operator scrolls warnings about unsupported platform
1866222 - Should output all options when runing `operator-sdk init --help`
1866318 - [RHOCS Usability Study][Dashboard] Users found it difficult to navigate to the OCS dashboard
1866322 - [RHOCS Usability Study][Dashboard] Alert details page does not help to explain the Alert
1866331 - [RHOCS Usability Study][Dashboard] Users need additional tooltips or definitions
1868755 - [vsphere] terraform provider vsphereprivate crashes when network is unavailable on host
1868870 - CVE-2020-15113 etcd: directories created via os.MkdirAll are not checked for permissions
1868872 - CVE-2020-15112 etcd: DoS in wal/wal.go
1868874 - CVE-2020-15114 etcd: gateway can include itself as an endpoint resulting in resource exhaustion and leads to DoS
1868880 - CVE-2020-15136 etcd: no authentication is performed against endpoints provided in the --endpoints flag
1868883 - CVE-2020-15106 etcd: Large slice causes panic in decodeRecord method
1871303 - [sig-instrumentation] Prometheus when installed on the cluster should have important platform topology metrics
1871770 - [IPI baremetal] The Keepalived.conf file is not indented evenly
1872659 - ClusterAutoscaler doesn't scale down when a node is not needed anymore
1873079 - SSH to api and console route is possible when the clsuter is hosted on Openstack
1873649 - proxy.config.openshift.io should validate user inputs
1874322 - openshift/oauth-proxy: htpasswd using SHA1 to store credentials
1874931 - Accessibility - Keyboard shortcut to exit YAML editor not easily discoverable
1876918 - scheduler test leaves taint behind
1878199 - Remove Log Level Normalization controller in cluster-config-operator release N+1
1878655 - [aws-custom-region] creating manifests take too much time when custom endpoint is unreachable
1878685 - Ingress resource with "Passthrough" annotation does not get applied when using the newer "networking.k8s.io/v1" API
1879077 - Nodes tainted after configuring additional host iface
1879140 - console auth errors not understandable by customers
1879182 - switch over to secure access-token logging by default and delete old non-sha256 tokens
1879184 - CVO must detect or log resource hotloops
1879495 - [4.6] namespace “openshift-user-workload-monitoring” does not exist”
1879638 - Binary file uploaded to a secret in OCP 4 GUI is not properly converted to Base64-encoded string
1879944 - [OCP 4.8] Slow PV creation with vsphere
1880757 - AWS: master not removed from LB/target group when machine deleted
1880758 - Component descriptions in cloud console have bad description (Managed by Terraform)
1881210 - nodePort for router-default metrics with NodePortService does not exist
1881481 - CVO hotloops on some service manifests
1881484 - CVO hotloops on deployment manifests
1881514 - CVO hotloops on imagestreams from cluster-samples-operator
1881520 - CVO hotloops on (some) clusterrolebindings
1881522 - CVO hotloops on clusterserviceversions packageserver
1881662 - Error getting volume limit for plugin kubernetes.io/<name> in kubelet logs
1881694 - Evidence of disconnected installs pulling images from the local registry instead of quay.io
1881938 - migrator deployment doesn't tolerate masters
1883371 - CVE-2020-26160 jwt-go: access restriction bypass vulnerability
1883587 - No option for user to select volumeMode
1883993 - Openshift 4.5.8 Deleting pv disk vmdk after delete machine
1884053 - cluster DNS experiencing disruptions during cluster upgrade in insights cluster
1884800 - Failed to set up mount unit: Invalid argument
1885186 - Removing ssh keys MC does not remove the key from authorized_keys
1885349 - [IPI Baremetal] Proxy Information Not passed to metal3
1885717 - activeDeadlineSeconds DeadlineExceeded does not show terminated container statuses
1886572 - auth: error contacting auth provider when extra ingress (not default) goes down
1887849 - When creating new storage class failure_domain is missing.
1888712 - Worker nodes do not come up on a baremetal IPI deployment with control plane network configured on a vlan on top of bond interface due to Pending CSRs
1889689 - AggregatedAPIErrors alert may never fire
1890678 - Cypress: Fix 'structure' accesibility violations
1890828 - Intermittent prune job failures causing operator degradation
1891124 - CP Conformance: CRD spec and status failures
1891301 - Deleting bmh by "oc delete bmh' get stuck
1891696 - [LSO] Add capacity UI does not check for node present in selected storageclass
1891766 - [LSO] Min-Max filter's from OCS wizard accepts Negative values and that cause PV not getting created
1892642 - oauth-server password metrics do not appear in UI after initial OCP installation
1892718 - HostAlreadyClaimed: The new route cannot be loaded with a new api group version
1893850 - Add an alert for requests rejected by the apiserver
1893999 - can't login ocp cluster with oc 4.7 client without the username
1895028 - [gcp-pd-csi-driver-operator] Volumes created by CSI driver are not deleted on cluster deletion
1895053 - Allow builds to optionally mount in cluster trust stores
1896226 - recycler-pod template should not be in kubelet static manifests directory
1896321 - MachineSet scaling from 0 is not available or evaluated incorrectly for the new or changed instance types
1896751 - [RHV IPI] Worker nodes stuck in the Provisioning Stage if the machineset has a long name
1897415 - [Bare Metal - Ironic] provide the ability to set the cipher suite for ipmitool when doing a Bare Metal IPI install
1897621 - Auth test.Login test.logs in as kubeadmin user: Timeout
1897918 - [oVirt] e2e tests fail due to kube-apiserver not finishing
1898680 - CVE-2020-7774 nodejs-y18n: prototype pollution vulnerability
1899057 - fix spurious br-ex MAC address error log
1899187 - [Openstack] node-valid-hostname.service failes during the first boot leading to 5 minute provisioning delay
1899587 - [External] RGW usage metrics shown on Object Service Dashboard is incorrect
1900454 - Enable host-based disk encryption on Azure platform
1900819 - Scaled ingress replicas following sharded pattern don't balance evenly across multi-AZ
1901207 - Search Page - Pipeline resources table not immediately updated after Name filter applied or removed
1901535 - Remove the managingOAuthAPIServer field from the authentication.operator API
1901648 - "do you need to set up custom dns" tooltip inaccurate
1902003 - Jobs Completions column is not sorting when there are "0 of 1" and "1 of 1" in the list.
1902076 - image registry operator should monitor status of its routes
1902247 - openshift-oauth-apiserver apiserver pod crashloopbackoffs
1903055 - [OSP] Validation should fail when no any IaaS flavor or type related field are given
1903228 - Pod stuck in Terminating, runc init process frozen
1903383 - Latest RHCOS 47.83. builds failing to install: mount /root.squashfs failed
1903553 - systemd container renders node NotReady after deleting it
1903700 - metal3 Deployment doesn't have unique Pod selector
1904006 - The --dir option doest not work for command `oc image extract`
1904505 - Excessive Memory Use in Builds
1904507 - vsphere-problem-detector: implement missing metrics
1904558 - Random init-p error when trying to start pod
1905095 - Images built on OCP 4.6 clusters create manifests that result in quay.io (and other registries) rejecting those manifests
1905147 - ConsoleQuickStart Card's prerequisites is a combined text instead of a list
1905159 - Installation on previous unused dasd fails after formatting
1905331 - openshift-multus initContainer multus-binary-copy, etc. are not requesting required resources: cpu, memory
1905460 - Deploy using virtualmedia for disabled provisioning network on real BM(HPE) fails
1905577 - Control plane machines not adopted when provisioning network is disabled
1905627 - Warn users when using an unsupported browser such as IE
1905709 - Machine API deletion does not properly handle stopped instances on AWS or GCP
1905849 - Default volumesnapshotclass should be created when creating default storageclass
1906056 - Bundles skipped via the `skips` field cannot be pinned
1906102 - CBO produces standard metrics
1906147 - ironic-rhcos-downloader should not use --insecure
1906304 - Unexpected value NaN parsing x/y attribute when viewing pod Memory/CPU usage chart
1906740 - [aws]Machine should be "Failed" when creating a machine with invalid region
1907309 - Migrate controlflow v1alpha1 to v1beta1 in storage
1907315 - the internal load balancer annotation for AWS should use "true" instead of "0.0.0.0/0" as value
1907353 - [4.8] OVS daemonset is wasting resources even though it doesn't do anything
1907614 - Update kubernetes deps to 1.20
1908068 - Enable DownwardAPIHugePages feature gate
1908169 - The example of Import URL is "Fedora cloud image list" for all templates.
1908170 - sriov network resource injector: Hugepage injection doesn't work with mult container
1908343 - Input labels in Manage columns modal should be clickable
1908378 - [sig-network] pods should successfully create sandboxes by getting pod - Static Pod Failures
1908655 - "Evaluating rule failed" for "record: node:node_num_cpu:sum" rule
1908762 - [Dualstack baremetal cluster] multicast traffic is not working on ovn-kubernetes
1908765 - [SCALE] enable OVN lflow data path groups
1908774 - [SCALE] enable OVN DB memory trimming on compaction
1908916 - CNO: turn on OVN DB RAFT diffs once all master DB pods are capable of it
1909091 - Pod/node/ip/template isn't showing when vm is running
1909600 - Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1909849 - release-openshift-origin-installer-e2e-aws-upgrade-fips-4.4 is perm failing
1909875 - [sig-cluster-lifecycle] Cluster version operator acknowledges upgrade : timed out waiting for cluster to acknowledge upgrade
1910067 - UPI: openstacksdk fails on "server group list"
1910113 - periodic-ci-openshift-release-master-ocp-4.5-ci-e2e-44-stable-to-45-ci is never passing
1910318 - OC 4.6.9 Installer failed: Some pods are not scheduled: 3 node(s) didn't match node selector: AWS compute machines without status
1910378 - socket timeouts for webservice communication between pods
1910396 - 4.6.9 cred operator should back-off when provisioning fails on throttling
1910500 - Could not list CSI provisioner on web when create storage class on GCP platform
1911211 - Should show the cert-recovery-controller version correctly
1911470 - ServiceAccount Registry Authfiles Do Not Contain Entries for Public Hostnames
1912571 - libvirt: Support setting dnsmasq options through the install config
1912820 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1913112 - BMC details should be optional for unmanaged hosts
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913341 - GCP: strange cluster behavior in CI run
1913399 - switch to v1beta1 for the priority and fairness APIs
1913525 - Panic in OLM packageserver when invoking webhook authorization endpoint
1913532 - After a 4.6 to 4.7 upgrade, a node went unready
1913974 - snapshot test periodically failing with "can't open '/mnt/test/data': No such file or directory"
1914127 - Deletion of oc get svc router-default -n openshift-ingress hangs
1914446 - openshift-service-ca-operator and openshift-service-ca pods run as root
1914994 - Panic observed in k8s-prometheus-adapter since k8s 1.20
1915122 - Size of the hostname was preventing proper DNS resolution of the worker node names
1915693 - Not able to install gpu-operator on cpumanager enabled node.
1915971 - Role and Role Binding breadcrumbs do not work as expected
1916116 - the left navigation menu would not be expanded if repeat clicking the links in Overview page
1916118 - [OVN] Source IP is not EgressIP if configured allow 0.0.0.0/0 in the EgressFirewall
1916392 - scrape priority and fairness endpoints for must-gather
1916450 - Alertmanager: add title and text fields to Adv. config. section of Slack Receiver form
1916489 - [sig-scheduling] SchedulerPriorities [Serial] fails with "Error waiting for 1 pods to be running - probably a timeout: Timeout while waiting for pods with labels to be ready"
1916553 - Default template's description is empty on details tab
1916593 - Destroy cluster sometimes stuck in a loop
1916872 - need ability to reconcile exgw annotations on pod add
1916890 - [OCP 4.7] api or api-int not available during installation
1917241 - [en_US] The tooltips of Created date time is not easy to read in all most of UIs.
1917282 - [Migration] MCO stucked for rhel worker after enable the migration prepare state
1917328 - It should default to current namespace when create vm from template action on details page
1917482 - periodic-ci-openshift-release-master-ocp-4.7-e2e-metal-ipi failing with "cannot go from state 'deploy failed' to state 'manageable'"
1917485 - [oVirt] ovirt machine/machineset object has missing some field validations
1917667 - Master machine config pool updates are stalled during the migration from SDN to OVNKube.
1917906 - [oauth-server] bump k8s.io/apiserver to 1.20.3
1917931 - [e2e-gcp-upi] failing due to missing pyopenssl library
1918101 - [vsphere]Delete Provisioning machine took about 12 minutes
1918376 - Image registry pullthrough does not support ICSP, mirroring e2es do not pass
1918442 - Service Reject ACL does not work on dualstack
1918723 - installer fails to write boot record on 4k scsi lun on s390x
1918729 - Add hide/reveal button for the token field in the KMS configuration page
1918750 - CVE-2021-3114 golang: crypto/elliptic: incorrect operations on the P-224 curve
1918785 - Pod request and limit calculations in console are incorrect
1918910 - Scale from zero annotations should not requeue if instance type missing
1919032 - oc image extract - will not extract files from image rootdir - "error: unexpected directory from mapping tests.test"
1919048 - Whereabouts IPv6 addresses not calculated when leading hextets equal 0
1919151 - [Azure] dnsrecords with invalid domain should not be published to Azure dnsZone
1919168 - `oc adm catalog mirror` doesn't work for the air-gapped cluster
1919291 - [Cinder-csi-driver] Filesystem did not expand for on-line volume resize
1919336 - vsphere-problem-detector should check if datastore is part of datastore cluster
1919356 - Add missing profile annotation in cluster-update-keys manifests
1919391 - CVE-2021-20206 containernetworking-cni: Arbitrary path injection via type field in CNI configuration
1919398 - Permissive Egress NetworkPolicy (0.0.0.0/0) is blocking all traffic
1919406 - OperatorHub filter heading "Provider Type" should be "Source"
1919737 - hostname lookup delays when master node down
1920209 - Multus daemonset upgrade takes the longest time in the cluster during an upgrade
1920221 - GCP jobs exhaust zone listing query quota sometimes due to too many initializations of cloud provider in tests
1920300 - cri-o does not support configuration of stream idle time
1920307 - "VM not running" should be "Guest agent required" on vm details page in dev console
1920532 - Problem in trying to connect through the service to a member that is the same as the caller.
1920677 - Various missingKey errors in the devconsole namespace
1920699 - Operation cannot be fulfilled on clusterresourcequotas.quota.openshift.io error when creating different OpenShift resources
1920901 - [4.7]"500 Internal Error" for prometheus route in https_proxy cluster
1920903 - oc adm top reporting unknown status for Windows node
1920905 - Remove DNS lookup workaround from cluster-api-provider
1921106 - A11y Violation: button name(s) on Utilization Card on Cluster Dashboard
1921184 - kuryr-cni binds to wrong interface on machine with two interfaces
1921227 - Fix issues related to consuming new extensions in Console static plugins
1921264 - Bundle unpack jobs can hang indefinitely
1921267 - ResourceListDropdown not internationalized
1921321 - SR-IOV obliviously reboot the node
1921335 - ThanosSidecarUnhealthy
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1921720 - test: openshift-tests.[sig-cli] oc observe works as expected [Suite:openshift/conformance/parallel]
1921763 - operator registry has high memory usage in 4.7... cleanup row closes
1921778 - Push to stage now failing with semver issues on old releases
1921780 - Search page not fully internationalized
1921781 - DefaultList component not internationalized
1921878 - [kuryr] Egress network policy with namespaceSelector in Kuryr behaves differently than in OVN-Kubernetes
1921885 - Server-side Dry-run with Validation Downloads Entire OpenAPI spec often
1921892 - MAO: controller runtime manager closes event recorder
1921894 - Backport Avoid node disruption when kube-apiserver-to-kubelet-signer is rotated
1921937 - During upgrade /etc/hostname becomes a directory, nodes are set with kubernetes.io/hostname=localhost label
1921953 - ClusterServiceVersion property inference does not infer package and version
1922063 - "Virtual Machine" should be "Templates" in template wizard
1922065 - Rootdisk size is default to 15GiB in customize wizard
1922235 - [build-watch] e2e-aws-upi - e2e-aws-upi container setup failing because of Python code version mismatch
1922264 - Restore snapshot as a new PVC: RWO/RWX access modes are not click-able if parent PVC is deleted
1922280 - [v2v] on the upstream release, In VM import wizard I see RHV but no oVirt
1922646 - Panic in authentication-operator invoking webhook authorization
1922648 - FailedCreatePodSandBox due to "failed to pin namespaces [uts]: [pinns:e]: /var/run/utsns exists and is not a directory: File exists"
1922764 - authentication operator is degraded due to number of kube-apiservers
1922992 - some button text on YAML sidebar are not translated
1922997 - [Migration]The SDN migration rollback failed.
1923038 - [OSP] Cloud Info is loaded twice
1923157 - Ingress traffic performance drop due to NodePort services
1923786 - RHV UPI fails with unhelpful message when ASSET_DIR is not set.
1923811 - Registry claims Available=True despite .status.readyReplicas == 0 while .spec.replicas == 2
1923847 - Error occurs when creating pods if configuring multiple key-only labels in default cluster-wide node selectors or project-wide node selectors
1923984 - Incorrect anti-affinity for UWM prometheus
1924020 - panic: runtime error: index out of range [0] with length 0
1924075 - kuryr-controller restart when enablePortPoolsPrepopulation = true
1924083 - "Activity" Pane of Persistent Storage tab shows events related to Noobaa too
1924140 - [OSP] Typo in OPENSHFIT_INSTALL_SKIP_PREFLIGHT_VALIDATIONS variable
1924171 - ovn-kube must handle single-stack to dual-stack migration
1924358 - metal UPI setup fails, no worker nodes
1924502 - Failed to start transient scope unit: Argument list too long / systemd[1]: Failed to set up mount unit: Invalid argument
1924536 - 'More about Insights' link points to support link
1924585 - "Edit Annotation" are not correctly translated in Chinese
1924586 - Control Plane status and Operators status are not fully internationalized
1924641 - [User Experience] The message "Missing storage class" needs to be displayed after user clicks Next and needs to be rephrased
1924663 - Insights operator should collect related pod logs when operator is degraded
1924701 - Cluster destroy fails when using byo with Kuryr
1924728 - Difficult to identify deployment issue if the destination disk is too small
1924729 - Create Storageclass for CephFS provisioner assumes incorrect default FSName in external mode (side-effect of fix for Bug 1878086)
1924747 - InventoryItem doesn't internationalize resource kind
1924788 - Not clear error message when there are no NADs available for the user
1924816 - Misleading error messages in ironic-conductor log
1924869 - selinux avc deny after installing OCP 4.7
1924916 - PVC reported as Uploading when it is actually cloning
1924917 - kuryr-controller in crash loop if IP is removed from secondary interfaces
1924953 - newly added 'excessive etcd leader changes' test case failing in serial job
1924968 - Monitoring list page filter options are not translated
1924983 - some components in utils directory not localized
1925017 - [UI] VM Details-> Network Interfaces, 'Name,' is displayed instead on 'Name'
1925061 - Prometheus backed by a PVC may start consuming a lot of RAM after 4.6 -> 4.7 upgrade due to series churn
1925083 - Some texts are not marked for translation on idp creation page.
1925087 - Add i18n support for the Secret page
1925148 - Shouldn't create the redundant imagestream when use `oc new-app --name=testapp2 -i ` with exist imagestream
1925207 - VM from custom template - cloudinit disk is not added if creating the VM from custom template using customization wizard
1925216 - openshift installer fails immediately failed to fetch Install Config
1925236 - OpenShift Route targets every port of a multi-port service
1925245 - oc idle: Clusters upgrading with an idled workload do not have annotations on the workload's service
1925261 - Items marked as mandatory in KMS Provider form are not enforced
1925291 - Baremetal IPI - While deploying with IPv6 provision network with subnet other than /64 masters fail to PXE boot
1925343 - [ci] e2e-metal tests are not using reserved instances
1925493 - Enable snapshot e2e tests
1925586 - cluster-etcd-operator is leaking transports
1925614 - Error: InstallPlan.operators.coreos.com not found
1925698 - On GCP, load balancers report kube-apiserver fails its /readyz check 50% of the time, causing load balancer backend churn and disruptions to apiservers
1926029 - [RFE] Either disable save or give warning when no disks support snapshot
1926054 - Localvolume CR is created successfully, when the storageclass name defined in the localvolume exists.
1926072 - Close button (X) does not work in the new "Storage cluster exists" Warning alert message(introduced via fix for Bug 1867400)
1926082 - Insights operator should not go degraded during upgrade
1926106 - [ja_JP][zh_CN] Create Project, Delete Project and Delete PVC modal are not fully internationalized
1926115 - Texts in “Insights” popover on overview page are not marked for i18n
1926123 - Pseudo bug: revert "force cert rotation every couple days for development" in 4.7
1926126 - some kebab/action menu translation issues
1926131 - Add HPA page is not fully internationalized
1926146 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it
1926154 - Create new pool with arbiter - wrong replica
1926278 - [oVirt] consume K8S 1.20 packages
1926279 - Pod ignores mtu setting from sriovNetworkNodePolicies in case of PF partitioning
1926285 - ignore pod not found status messages
1926289 - Accessibility: Modal content hidden from screen readers
1926310 - CannotRetrieveUpdates alerts on Critical severity
1926329 - [Assisted-4.7][Staging] monitoring stack in staging is being overloaded by the amount of metrics being exposed by assisted-installer pods and scraped by prometheus.
1926336 - Service details can overflow boxes at some screen widths
1926346 - move to go 1.15 and registry.ci.openshift.org
1926364 - Installer timeouts because proxy blocked connection to Ironic API running on bootstrap VM
1926465 - bootstrap kube-apiserver does not have --advertise-address set – was: [BM][IPI][DualStack] Installation fails cause Kubernetes service doesn't have IPv6 endpoints
1926484 - API server exits non-zero on 2 SIGTERM signals
1926547 - OpenShift installer not reporting IAM permission issue when removing the Shared Subnet Tag
1926579 - Setting .spec.policy is deprecated and will be removed eventually. Please use .spec.profile instead is being logged every 3 seconds in scheduler operator log
1926598 - Duplicate alert rules are displayed on console for thanos-querier api return wrong results
1926776 - "Template support" modal appears when select the RHEL6 common template
1926835 - [e2e][automation] prow gating use unsupported CDI version
1926843 - pipeline with finally tasks status is improper
1926867 - openshift-apiserver Available is False with 3 pods not ready for a while during upgrade
1926893 - When deploying the operator via OLM (after creating the respective catalogsource), the deployment "lost" the `resources` section.
1926903 - NTO may fail to disable stalld when relying on Tuned '[service]' plugin
1926931 - Inconsistent ovs-flow rule on one of the app node for egress node
1926943 - vsphere-problem-detector: Alerts in CI jobs
1926977 - [sig-devex][Feature:ImageEcosystem][Slow] openshift sample application repositories rails/nodejs
1927013 - Tables don't render properly at smaller screen widths
1927017 - CCO does not relinquish leadership when restarting for proxy CA change
1927042 - Empty static pod files on UPI deployments are confusing
1927047 - multiple external gateway pods will not work in ingress with IP fragmentation
1927068 - Workers fail to PXE boot when IPv6 provisionining network has subnet other than /64
1927075 - [e2e][automation] Fix pvc string in pvc.view
1927118 - OCP 4.7: NVIDIA GPU Operator DCGM metrics not displayed in OpenShift Console Monitoring Metrics page
1927244 - UPI installation with Kuryr timing out on bootstrap stage
1927263 - kubelet service takes around 43 secs to start container when started from stopped state
1927264 - FailedCreatePodSandBox due to multus inability to reach apiserver
1927310 - Performance: Console makes unnecessary requests for en-US messages on load
1927340 - Race condition in OperatorCondition reconcilation
1927366 - OVS configuration service unable to clone NetworkManager's connections in the overlay FS
1927391 - Fix flake in TestSyncPodsDeletesWhenSourcesAreReady
1927393 - 4.7 still points to 4.6 catalog images
1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects
1927423 - Happy "Not Found" and no visible error messages on error-list page when /silences 504s
1927465 - Homepage dashboard content not internationalized
1927678 - Reboot interface defaults to softPowerOff so fencing is too slow
1927731 - /usr/lib/dracut/modules.d/30ignition/ignition --version sigsev
1927797 - 'Pod(s)' should be included in the pod donut label when a horizontal pod autoscaler is enabled
1927882 - Can't create cluster role binding from UI when a project is selected
1927895 - global RuntimeConfig is overwritten with merge result
1927898 - i18n Admin Notifier
1927902 - i18n Cluster Utilization dashboard duration
1927903 - "CannotRetrieveUpdates" - critical error in openshift web console
1927925 - Manually misspelled as Manualy
1927941 - StatusDescriptor detail item and Status component can cause runtime error when the status is an object or array
1927942 - etcd should use socket option (SO_REUSEADDR) instead of wait for port release on process restart
1927944 - cluster version operator cycles terminating state waiting for leader election
1927993 - Documentation Links in OKD Web Console are not Working
1928008 - Incorrect behavior when we click back button after viewing the node details in Internal-attached mode
1928045 - N+1 scaling Info message says "single zone" even if the nodes are spread across 2 or 0 zones
1928147 - Domain search set in the required domains in Option 119 of DHCP Server is ignored by RHCOS on RHV
1928157 - 4.7 CNO claims to be done upgrading before it even starts
1928164 - Traffic to outside the cluster redirected when OVN is used and NodePort service is configured
1928297 - HAProxy fails with 500 on some requests
1928473 - 
NetworkManager overlay FS not being created on None platform\n1928512 - sap license management logs gatherer\n1928537 - Cannot IPI with tang/tpm disk encryption\n1928640 - Definite error message when using StorageClass based on azure-file / Premium_LRS\n1928658 - Update plugins and Jenkins version to prepare openshift-sync-plugin 1.0.46 release\n1928850 - Unable to pull images due to limited quota on Docker Hub\n1928851 - manually creating NetNamespaces will break things and this is not obvious\n1928867 - golden images - DV should not be created with WaitForFirstConsumer\n1928869 - Remove css required to fix search bug in console caused by pf issue in 2021.1\n1928875 - Update translations\n1928893 - Memory Pressure Drop Down Info is stating \"Disk\" capacity is low instead of memory\n1928931 - DNSRecord CRD is using deprecated v1beta1 API\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1929052 - Add new Jenkins agent maven dir for 3.6\n1929056 - kube-apiserver-availability.rules are failing evaluation\n1929110 - LoadBalancer service check test fails during vsphere upgrade\n1929136 - openshift isn\u0027t able to mount nfs manila shares to pods\n1929175 - LocalVolumeSet: PV is created on disk belonging to other provisioner\n1929243 - Namespace column missing in Nodes Node Details / pods tab\n1929277 - Monitoring workloads using too high a priorityclass\n1929281 - Update Tech Preview badge to transparent border color when upgrading to PatternFly v4.87.1\n1929314 - ovn-kubernetes endpoint slice controller doesn\u0027t run on CI jobs\n1929359 - etcd-quorum-guard uses origin-cli [4.8]\n1929577 - Edit Application action overwrites Deployment envFrom values on save\n1929654 - Registry for Azure uses legacy V1 StorageAccount\n1929693 - Pod stuck at \"ContainerCreating\" status\n1929733 - oVirt CSI driver operator is constantly restarting\n1929769 - Getting 
404 after switching user perspective in another tab and reload Project details\n1929803 - Pipelines shown in edit flow for Workloads created via ContainerImage flow\n1929824 - fix alerting on volume name check for vsphere\n1929917 - Bare-metal operator is firing for ClusterOperatorDown for 15m during 4.6 to 4.7 upgrade\n1929944 - The etcdInsufficientMembers alert fires incorrectly when any instance is down and not when quorum is lost\n1930007 - filter dropdown item filter and resource list dropdown item filter doesn\u0027t support multi selection\n1930015 - OS list is overlapped by buttons in template wizard\n1930064 - Web console crashes during VM creation from template when no storage classes are defined\n1930220 - Cinder CSI driver is not able to mount volumes under heavier load\n1930240 - Generated clouds.yaml incomplete when provisioning network is disabled\n1930248 - After creating a remediation flow and rebooting a worker there is no access to the openshift-web-console\n1930268 - intel vfio devices are not expose as resources\n1930356 - Darwin binary missing from mirror.openshift.com\n1930393 - Gather info about unhealthy SAP pods\n1930546 - Monitoring-dashboard-workload keep loading when user with cluster-role cluster-monitoring-view login develoer console\n1930570 - Jenkins templates are displayed in Developer Catalog twice\n1930620 - the logLevel field in containerruntimeconfig can\u0027t be set to \"trace\"\n1930631 - Image local-storage-mustgather in the doc does not come from product registry\n1930893 - Backport upstream patch 98956 for pod terminations\n1931005 - Related objects page doesn\u0027t show the object when its name is empty\n1931103 - remove periodic log within kubelet\n1931115 - Azure cluster install fails with worker type workers Standard_D4_v2\n1931215 - [RFE] Cluster-api-provider-ovirt should handle affinity groups\n1931217 - [RFE] Installer should create RHV Affinity group for OCP cluster VMS\n1931467 - Kubelet consuming a large amount 
of CPU and memory and node becoming unhealthy\n1931505 - [IPI baremetal] Two nodes hold the VIP post remove and start of the Keepalived container\n1931522 - Fresh UPI install on BM with bonding using OVN Kubernetes fails\n1931529 - SNO: mentioning of 4 nodes in error message - Cluster network CIDR prefix 24 does not contain enough addresses for 4 hosts each one with 25 prefix (128 addresses)\n1931629 - Conversational Hub Fails due to ImagePullBackOff\n1931637 - Kubeturbo Operator fails due to ImagePullBackOff\n1931652 - [single-node] etcd: discover-etcd-initial-cluster graceful termination race. \n1931658 - [single-node] cluster-etcd-operator: cluster never pivots from bootstrapIP endpoint\n1931674 - [Kuryr] Enforce nodes MTU for the Namespaces and Pods\n1931852 - Ignition HTTP GET is failing, because DHCP IPv4 config is failing silently\n1931883 - Fail to install Volume Expander Operator due to CrashLookBackOff\n1931949 - Red Hat Integration Camel-K Operator keeps stuck in Pending state\n1931974 - Operators cannot access kubeapi endpoint on OVNKubernetes on ipv6\n1931997 - network-check-target causes upgrade to fail from 4.6.18 to 4.7\n1932001 - Only one of multiple subscriptions to the same package is honored\n1932097 - Apiserver liveness probe is marking it as unhealthy during normal shutdown\n1932105 - machine-config ClusterOperator claims level while control-plane still updating\n1932133 - AWS EBS CSI Driver doesn\u2019t support \u201ccsi.storage.k8s.io/fsTyps\u201d parameter\n1932135 - When \u201ciopsPerGB\u201d parameter is not set, event for AWS EBS CSI Driver provisioning is not clear\n1932152 - When \u201ciopsPerGB\u201d parameter is set to a wrong number, events for AWS EBS CSI Driver provisioning are not clear\n1932154 - [AWS ] machine stuck in provisioned phase , no warnings or errors\n1932182 - catalog operator causing CPU spikes and bad etcd performance\n1932229 - Can\u2019t find kubelet metrics for aws ebs csi volumes\n1932281 - [Assisted-4.7][UI] 
Unable to change upgrade channel once upgrades were discovered\n1932323 - CVE-2021-26540 sanitize-html: improper validation of hostnames set by the \"allowedIframeHostnames\" option can lead to bypass hostname whitelist for iframe element\n1932324 - CRIO fails to create a Pod in sandbox stage - starting container process caused: process_linux.go:472: container init caused: Running hook #0:: error running hook: exit status 255, stdout: , stderr: \\\"\\n\"\n1932362 - CVE-2021-26539 sanitize-html: improper handling of internationalized domain name (IDN) can lead to bypass hostname whitelist validation\n1932401 - Cluster Ingress Operator degrades if external LB redirects http to https because of new \"canary\" route\n1932453 - Update Japanese timestamp format\n1932472 - Edit Form/YAML switchers cause weird collapsing/code-folding issue\n1932487 - [OKD] origin-branding manifest is missing cluster profile annotations\n1932502 - Setting MTU for a bond interface using Kernel arguments is not working\n1932618 - Alerts during a test run should fail the test job, but were not\n1932624 - ClusterMonitoringOperatorReconciliationErrors is pending at the end of an upgrade and probably should not be\n1932626 - During a 4.8 GCP upgrade OLM fires an alert indicating the operator is unhealthy\n1932673 - Virtual machine template provided by red hat should not be editable. 
The UI allows to edit and then reverse the change after it was made\n1932789 - Proxy with port is unable to be validated if it overlaps with service/cluster network\n1932799 - During a hive driven baremetal installation the process does not go beyond 80% in the bootstrap VM\n1932805 - e2e: test OAuth API connections in the tests by that name\n1932816 - No new local storage operator bundle image is built\n1932834 - enforce the use of hashed access/authorize tokens\n1933101 - Can not upgrade a Helm Chart that uses a library chart in the OpenShift dev console\n1933102 - Canary daemonset uses default node selector\n1933114 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should be able to connect to a service that is idled because a GET on the route will unidle it [Suite:openshift/conformance/parallel/minimal]\n1933159 - multus DaemonSets should use maxUnavailable: 33%\n1933173 - openshift-sdn/sdn DaemonSet should use maxUnavailable: 10%\n1933174 - openshift-sdn/ovs DaemonSet should use maxUnavailable: 10%\n1933179 - network-check-target DaemonSet should use maxUnavailable: 10%\n1933180 - openshift-image-registry/node-ca DaemonSet should use maxUnavailable: 10%\n1933184 - openshift-cluster-csi-drivers DaemonSets should use maxUnavailable: 10%\n1933263 - user manifest with nodeport services causes bootstrap to block\n1933269 - Cluster unstable replacing an unhealthy etcd member\n1933284 - Samples in CRD creation are ordered arbitarly\n1933414 - Machines are created with unexpected name for Ports\n1933599 - bump k8s.io/apiserver to 1.20.3\n1933630 - [Local Volume] Provision disk failed when disk label has unsupported value like \":\"\n1933664 - Getting Forbidden for image in a container template when creating a sample app\n1933708 - Grafana is not displaying deployment config resources in dashboard `Default /Kubernetes / Compute Resources / Namespace (Workloads)`\n1933711 - EgressDNS: Keep short lived records at most 30s\n1933730 - 
[AI-UI-Wizard] Toggling \"Use extra disks for local storage\" checkbox highlights the \"Next\" button to move forward but grays out once clicked\n1933761 - Cluster DNS service caps TTLs too low and thus evicts from its cache too aggressively\n1933772 - MCD Crash Loop Backoff\n1933805 - TargetDown alert fires during upgrades because of normal upgrade behavior\n1933857 - Details page can throw an uncaught exception if kindObj prop is undefined\n1933880 - Kuryr-Controller crashes when it\u0027s missing the status object\n1934021 - High RAM usage on machine api termination node system oom\n1934071 - etcd consuming high amount of memory and CPU after upgrade to 4.6.17\n1934080 - Both old and new Clusterlogging CSVs stuck in Pending during upgrade\n1934085 - Scheduling conformance tests failing in a single node cluster\n1934107 - cluster-authentication-operator builds URL incorrectly for IPv6\n1934112 - Add memory and uptime metadata to IO archive\n1934113 - mcd panic when there\u0027s not enough free disk space\n1934123 - [OSP] First public endpoint is used to fetch ignition config from Glance URL (with multiple endpoints) on OSP\n1934163 - Thanos Querier restarting and gettin alert ThanosQueryHttpRequestQueryRangeErrorRateHigh\n1934174 - rootfs too small when enabling NBDE\n1934176 - Machine Config Operator degrades during cluster update with failed to convert Ignition config spec v2 to v3\n1934177 - knative-camel-operator CreateContainerError \"container_linux.go:366: starting container process caused: chdir to cwd (\\\"/home/nonroot\\\") set in config.json failed: permission denied\"\n1934216 - machineset-controller stuck in CrashLoopBackOff after upgrade to 4.7.0\n1934229 - List page text filter has input lag\n1934397 - Extend OLM operator gatherer to include Operator/ClusterServiceVersion conditions\n1934400 - [ocp_4][4.6][apiserver-auth] OAuth API servers are not ready - PreconditionNotReady\n1934516 - Setup different priority classes for prometheus-k8s and 
prometheus-user-workload pods\n1934556 - OCP-Metal images\n1934557 - RHCOS boot image bump for LUKS fixes\n1934643 - Need BFD failover capability on ECMP routes\n1934711 - openshift-ovn-kubernetes ovnkube-node DaemonSet should use maxUnavailable: 10%\n1934773 - Canary client should perform canary probes explicitly over HTTPS (rather than redirect from HTTP)\n1934905 - CoreDNS\u0027s \"errors\" plugin is not enabled for custom upstream resolvers\n1935058 - Can\u2019t finish install sts clusters on aws government region\n1935102 - Error: specifying a root certificates file with the insecure flag is not allowed during oc login\n1935155 - IGMP/MLD packets being dropped\n1935157 - [e2e][automation] environment tests broken\n1935165 - OCP 4.6 Build fails when filename contains an umlaut\n1935176 - Missing an indication whether the deployed setup is SNO. \n1935269 - Topology operator group shows child Jobs. Not shown in details view\u0027s resources. \n1935419 - Failed to scale worker using virtualmedia on Dell R640\n1935528 - [AWS][Proxy] ingress reports degrade with CanaryChecksSucceeding=False in the cluster with proxy setting\n1935539 - Openshift-apiserver CO unavailable during cluster upgrade from 4.6 to 4.7\n1935541 - console operator panics in DefaultDeployment with nil cm\n1935582 - prometheus liveness probes cause issues while replaying WAL\n1935604 - high CPU usage fails ingress controller\n1935667 - pipelinerun status icon rendering issue\n1935706 - test: Detect when the master pool is still updating after upgrade\n1935732 - Update Jenkins agent maven directory to be version agnostic [ART ocp build data]\n1935814 - Pod and Node lists eventually have incorrect row heights when additional columns have long text\n1935909 - New CSV using ServiceAccount named \"default\" stuck in Pending during upgrade\n1936022 - DNS operator performs spurious updates in response to API\u0027s defaulting of daemonset\u0027s terminationGracePeriod and service\u0027s 
clusterIPs\n1936030 - Ingress operator performs spurious updates in response to API\u0027s defaulting of NodePort service\u0027s clusterIPs field\n1936223 - The IPI installer has a typo. It is missing the word \"the\" in \"the Engine\". \n1936336 - Updating multus-cni builder \u0026 base images to be consistent with ART 4.8 (closed)\n1936342 - kuryr-controller restarting after 3 days cluster running - pools without members\n1936443 - Hive based OCP IPI baremetal installation fails to connect to API VIP port 22623\n1936488 - [sig-instrumentation][Late] Alerts shouldn\u0027t report any alerts in firing state apart from Watchdog and AlertmanagerReceiversNotConfigured: Prometheus query error\n1936515 - sdn-controller is missing some health checks\n1936534 - When creating a worker with a used mac-address stuck on registering\n1936585 - configure alerts if the catalogsources are missing\n1936620 - OLM checkbox descriptor renders switch instead of checkbox\n1936721 - network-metrics-deamon not associated with a priorityClassName\n1936771 - [aws ebs csi driver] The event for Pod consuming a readonly PVC is not clear\n1936785 - Configmap gatherer doesn\u0027t include namespace name (in the archive path) in case of a configmap with binary data\n1936788 - RBD RWX PVC creation with Filesystem volume mode selection is creating RWX PVC with Block volume mode instead of disabling Filesystem volume mode selection\n1936798 - Authentication log gatherer shouldn\u0027t scan all the pod logs in the openshift-authentication namespace\n1936801 - Support ServiceBinding 0.5.0+\n1936854 - Incorrect imagestream is shown as selected in knative service container image edit flow\n1936857 - e2e-ovirt-ipi-install-install is permafailing on 4.5 nightlies\n1936859 - ovirt 4.4 -\u003e 4.5 upgrade jobs are permafailing\n1936867 - Periodic vsphere IPI install is broken - missing pip\n1936871 - [Cinder CSI] Topology aware provisioning doesn\u0027t work when Nova and Cinder AZs are different\n1936904 - 
Wrong output YAML when syncing groups without --confirm\n1936983 - Topology view - vm details screen isntt stop loading\n1937005 - when kuryr quotas are unlimited, we should not sent alerts\n1937018 - FilterToolbar component does not handle \u0027null\u0027 value for \u0027rowFilters\u0027 prop\n1937020 - Release new from image stream chooses incorrect ID based on status\n1937077 - Blank White page on Topology\n1937102 - Pod Containers Page Not Translated\n1937122 - CAPBM changes to support flexible reboot modes\n1937145 - [Local storage] PV provisioned by localvolumeset stays in \"Released\" status after the pod/pvc deleted\n1937167 - [sig-arch] Managed cluster should have no crashlooping pods in core namespaces over four minutes\n1937244 - [Local Storage] The model name of aws EBS doesn\u0027t be extracted well\n1937299 - pod.spec.volumes.awsElasticBlockStore.partition is not respected on NVMe volumes\n1937452 - cluster-network-operator CI linting fails in master branch\n1937459 - Wrong Subnet retrieved for Service without Selector\n1937460 - [CI] Network quota pre-flight checks are failing the installation\n1937464 - openstack cloud credentials are not getting configured with correct user_domain_name across the cluster\n1937466 - KubeClientCertificateExpiration alert is confusing, without explanation in the documentation\n1937496 - Metrics viewer in OCP Console is missing date in a timestamp for selected datapoint\n1937535 - Not all image pulls within OpenShift builds retry\n1937594 - multiple pods in ContainerCreating state after migration from OpenshiftSDN to OVNKubernetes\n1937627 - Bump DEFAULT_DOC_URL for 4.8\n1937628 - Bump upgrade channels for 4.8\n1937658 - Description for storage class encryption during storagecluster creation needs to be updated\n1937666 - Mouseover on headline\n1937683 - Wrong icon classification of output in buildConfig when the destination is a DockerImage\n1937693 - ironic image \"/\" cluttered with files\n1937694 - [oVirt] split 
ovirt providerIDReconciler logic into NodeController and ProviderIDController\n1937717 - If browser default font size is 20, the layout of template screen breaks\n1937722 - OCP 4.8 vuln due to BZ 1936445\n1937929 - Operand page shows a 404:Not Found error for OpenShift GitOps Operator\n1937941 - [RFE]fix wording for favorite templates\n1937972 - Router HAProxy config file template is slow to render due to repetitive regex compilations\n1938131 - [AWS] Missing iam:ListAttachedRolePolicies permission in permissions.go\n1938321 - Cannot view PackageManifest objects in YAML on \u0027Home \u003e Search\u0027 page nor \u0027CatalogSource details \u003e Operators tab\u0027\n1938465 - thanos-querier should set a CPU request on the thanos-query container\n1938466 - packageserver deployment sets neither CPU or memory request on the packageserver container\n1938467 - The default cluster-autoscaler should get default cpu and memory requests if user omits them\n1938468 - kube-scheduler-operator has a container without a CPU request\n1938492 - Marketplace extract container does not request CPU or memory\n1938493 - machine-api-operator declares restrictive cpu and memory limits where it should not\n1938636 - Can\u0027t set the loglevel of the container: cluster-policy-controller and kube-controller-manager-recovery-controller\n1938903 - Time range on dashboard page will be empty after drog and drop mouse in the graph\n1938920 - ovnkube-master/ovs-node DaemonSets should use maxUnavailable: 10%\n1938947 - Update blocked from 4.6 to 4.7 when using spot/preemptible instances\n1938949 - [VPA] Updater failed to trigger evictions due to \"vpa-admission-controller\" not found\n1939054 - machine healthcheck kills aws spot instance before generated\n1939060 - CNO: nodes and masters are upgrading simultaneously\n1939069 - Add source to vm template silently failed when no storage class is defined in the cluster\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1939168 - 
Builds failing for OCP 3.11 since PR#25 was merged\n1939226 - kube-apiserver readiness probe appears to be hitting /healthz, not /readyz\n1939227 - kube-apiserver liveness probe appears to be hitting /healthz, not /livez\n1939232 - CI tests using openshift/hello-world broken by Ruby Version Update\n1939270 - fix co upgradeableFalse status and reason\n1939294 - OLM may not delete pods with grace period zero (force delete)\n1939412 - missed labels for thanos-ruler pods\n1939485 - CVE-2021-20291 containers/storage: DoS via malicious image\n1939547 - Include container=\"POD\" in resource queries\n1939555 - VSphereProblemDetectorControllerDegraded: context canceled during upgrade to 4.8.0\n1939573 - after entering valid git repo url on add flow page, throwing warning message instead Validated\n1939580 - Authentication operator is degraded during 4.8 to 4.8 upgrade and normal 4.8 e2e runs\n1939606 - Attempting to put a host into maintenance mode warns about Ceph cluster health, but no storage cluster problems are apparent\n1939661 - support new AWS region ap-northeast-3\n1939726 - clusteroperator/network should not change condition/Degraded during normal serial test execution\n1939731 - Image registry operator reports unavailable during normal serial run\n1939734 - Node Fanout Causes Excessive WATCH Secret Calls, Taking Down Clusters\n1939740 - dual stack nodes with OVN single ipv6 fails on bootstrap phase\n1939752 - ovnkube-master sbdb container does not set requests on cpu or memory\n1939753 - Delete HCO is stucking if there is still VM in the cluster\n1939815 - Change the Warning Alert for Encrypted PVs in Create StorageClass(provisioner:RBD) page\n1939853 - [DOC] Creating manifests API should not allow folder in the \"file_name\"\n1939865 - GCP PD CSI driver does not have CSIDriver instance\n1939869 - [e2e][automation] Add annotations to datavolume for HPP\n1939873 - Unlimited number of characters accepted for base domain name\n1939943 - 
`cluster-kube-apiserver-operator check-endpoints` observed a panic: runtime error: invalid memory address or nil pointer dereference\n1940030 - cluster-resource-override: fix spelling mistake for run-level match expression in webhook configuration\n1940057 - Openshift builds should use a wach instead of polling when checking for pod status\n1940142 - 4.6-\u003e4.7 updates stick on OpenStackCinderCSIDriverOperatorCR_OpenStackCinderDriverControllerServiceController_Deploying\n1940159 - [OSP] cluster destruction fails to remove router in BYON (with provider network) with Kuryr as primary network\n1940206 - Selector and VolumeTableRows not i18ned\n1940207 - 4.7-\u003e4.6 rollbacks stuck on prometheusrules admission webhook \"no route to host\"\n1940314 - Failed to get type for Dashboard Kubernetes / Compute Resources / Namespace (Workloads)\n1940318 - No data under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027\n1940322 - Split of dashbard is wrong, many Network parts\n1940337 - rhos-ipi installer fails with not clear message when openstack tenant doesn\u0027t have flavors needed for compute machines\n1940361 - [e2e][automation] Fix vm action tests with storageclass HPP\n1940432 - Gather datahubs.installers.datahub.sap.com resources from SAP clusters\n1940488 - After fix for CVE-2021-3344, Builds do not mount node entitlement keys\n1940498 - pods may fail to add logical port due to lr-nat-del/lr-nat-add error messages\n1940499 - hybrid-overlay not logging properly before exiting due to an error\n1940518 - Components in bare metal components lack resource requests\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1940704 - prjquota is dropped from rootflags if rootfs is reprovisioned\n1940755 - [Web-console][Local Storage] LocalVolumeSet could not be created from web-console without detail error info\n1940865 - Add BareMetalPlatformType into e2e upgrade service unsupported list\n1940876 - 
Components in ovirt components lack resource requests\n1940889 - Installation failures in OpenStack release jobs\n1940933 - [sig-arch] Check if alerts are firing during or after upgrade success: AggregatedAPIDown on v1beta1.metrics.k8s.io\n1940939 - Wrong Openshift node IP as kubelet setting VIP as node IP\n1940940 - csi-snapshot-controller goes unavailable when machines are added removed to cluster\n1940950 - vsphere: client/bootstrap CSR double create\n1940972 - vsphere: [4.6] CSR approval delayed for unknown reason\n1941000 - cinder storageclass creates persistent volumes with wrong label failure-domain.beta.kubernetes.io/zone in multi availability zones architecture on OSP 16. \n1941334 - [RFE] Cluster-api-provider-ovirt should handle auto pinning policy\n1941342 - Add `kata-osbuilder-generate.service` as part of the default presets\n1941456 - Multiple pods stuck in ContainerCreating status with the message \"failed to create container for [kubepods burstable podxxx] : dbus: connection closed by user\" being seen in the journal log\n1941526 - controller-manager-operator: Observed a panic: nil pointer dereference\n1941592 - HAProxyDown not Firing\n1941606 - [assisted operator] Assisted Installer Operator CSV related images should be digests for icsp\n1941625 - Developer -\u003e Topology - i18n misses\n1941635 - Developer -\u003e Monitoring - i18n misses\n1941636 - BM worker nodes deployment with virtual media failed while trying to clean raid\n1941645 - Developer -\u003e Builds - i18n misses\n1941655 - Developer -\u003e Pipelines - i18n misses\n1941667 - Developer -\u003e Project - i18n misses\n1941669 - Developer -\u003e ConfigMaps - i18n misses\n1941759 - Errored pre-flight checks should not prevent install\n1941798 - Some details pages don\u0027t have internationalized ResourceKind labels\n1941801 - Many filter toolbar dropdowns haven\u0027t been internationalized\n1941815 - From the web console the terminal can no longer connect after using leaving and 
returning to the terminal view
1941859 - [assisted operator] assisted pod deploy first time in error state
1941901 - Toleration merge logic does not account for multiple entries with the same key
1941915 - No validation against template name in boot source customization
1941936 - when setting parameters in containerRuntimeConfig, it will show incorrect information on its description
1941980 - cluster-kube-descheduler operator is broken when upgraded from 4.7 to 4.8
1941990 - Pipeline metrics endpoint changed in osp-1.4
1941995 - fix backwards incompatible trigger api changes in osp1.4
1942086 - Administrator -> Home - i18n misses
1942117 - Administrator -> Workloads - i18n misses
1942125 - Administrator -> Serverless - i18n misses
1942193 - Operand creation form - broken/cutoff blue line on the Accordion component (fieldGroup)
1942207 - [vsphere] hostname are changed when upgrading from 4.6 to 4.7.x causing upgrades to fail
1942271 - Insights operator doesn't gather pod information from openshift-cluster-version
1942375 - CRI-O failing with error "reserving ctr name"
1942395 - The status is always "Updating" on dc detail page after deployment has failed.
1942521 - [Assisted-4.7] [Staging][OCS] Minimum memory for selected role is failing although minimum OCP requirement satisfied
1942522 - Resolution fails to sort channel if inner entry does not satisfy predicate
1942536 - Corrupted image preventing containers from starting
1942548 - Administrator -> Networking - i18n misses
1942553 - CVE-2021-22133 go.elastic.co/apm: leaks sensitive HTTP headers during panic
1942555 - Network policies in ovn-kubernetes don't support external traffic from router when the endpoint publishing strategy is HostNetwork
1942557 - Query is reporting "no datapoint" when label cluster="" is set but work when the label is removed or when running directly in Prometheus
1942608 - crictl cannot list the images with an error: error locating item named "manifest" for image with ID
1942614 - Administrator -> Storage - i18n misses
1942641 - Administrator -> Builds - i18n misses
1942673 - Administrator -> Pipelines - i18n misses
1942694 - Resource names with a colon do not display property in the browser window title
1942715 - Administrator -> User Management - i18n misses
1942716 - Quay Container Security operator has Medium <-> Low colors reversed
1942725 - [SCC] openshift-apiserver degraded when creating new pod after installing Stackrox which creates a less privileged SCC [4.8]
1942736 - Administrator -> Administration - i18n misses
1942749 - Install Operator form should use info icon for popovers
1942837 - [OCPv4.6] unable to deploy pod with unsafe sysctls
1942839 - Windows VMs fail to start on air-gapped environments
1942856 - Unable to assign nodes for EgressIP even if the egress-assignable label is set
1942858 - [RFE]Confusing detach volume UX
1942883 - AWS EBS CSI driver does not support partitions
1942894 - IPA error when provisioning masters due to an error from ironic.conductor - /dev/sda is busy
1942935 - must-gather improvements
1943145 - vsphere: client/bootstrap CSR double create
1943175 - unable to install IPI PRIVATE OpenShift cluster in Azure due to organization policies (set azure storage account TLS version default to 1.2)
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1943219 - unable to install IPI PRIVATE OpenShift cluster in Azure - SSH access from the Internet should be blocked
1943224 - cannot upgrade openshift-kube-descheduler from 4.7.2 to latest
1943238 - The conditions table does not occupy 100% of the width.
1943258 - [Assisted-4.7][Staging][Advanced Networking] Cluster install fails while waiting for control plane
1943314 - [OVN SCALE] Combine Logical Flows inside Southbound DB.
1943315 - avoid workload disruption for ICSP changes
1943320 - Baremetal node loses connectivity with bonded interface and OVNKubernetes
1943329 - TLSSecurityProfile missing from KubeletConfig CRD Manifest
1943356 - Dynamic plugins surfaced in the UI should be referred to as "Console plugins"
1943539 - crio-wipe is failing to start "Failed to shutdown storage before wiping: A layer is mounted: layer is in use by a container"
1943543 - DeploymentConfig Rollback doesn't reset params correctly
1943558 - [assisted operator] Assisted Service pod unable to reach self signed local registry in disco environement
1943578 - CoreDNS caches NXDOMAIN responses for up to 900 seconds
1943614 - add bracket logging on openshift/builder calls into buildah to assist test-platform team triage
1943637 - upgrade from ocp 4.5 to 4.6 does not clear SNAT rules on ovn
1943649 - don't use hello-openshift for network-check-target
1943667 - KubeDaemonSetRolloutStuck fires during upgrades too often because it does not accurately detect progress
1943719 - storage-operator/vsphere-problem-detector causing upgrades to fail that would have succeeded in past versions
1943804 - API server on AWS takes disruption between 70s and 110s after pod begins termination via external LB
1943845 - Router pods should have startup probes configured
1944121 - OVN-kubernetes references AddressSets after deleting them, causing ovn-controller errors
1944160 - CNO: nbctl daemon should log reconnection info
1944180 - OVN-Kube Master does not release election lock on shutdown
1944246 - Ironic fails to inspect and move node to "manageable' but get bmh remains in "inspecting"
1944268 - openshift-install AWS SDK is missing endpoints for the ap-northeast-3 region
1944509 - Translatable texts without context in ssh expose component
1944581 - oc project not works with cluster proxy
1944587 - VPA could not take actions based on the recommendation when min-replicas=1
1944590 - The field name "VolumeSnapshotContent" is wrong on VolumeSnapshotContent detail page
1944602 - Consistant fallures of features/project-creation.feature Cypress test in CI
1944631 - openshif authenticator should not accept non-hashed tokens
1944655 - [manila-csi-driver-operator] openstack-manila-csi-nodeplugin pods stucked with ".. still connecting to unix:///var/lib/kubelet/plugins/csi-nfsplugin/csi.sock"
1944660 - dm-multipath race condition on bare metal causing /boot partition mount failures
1944674 - Project field become to "All projects" and disabled in "Review and create virtual machine" step in devconsole
1944678 - Whereabouts IPAM CNI duplicate IP addresses assigned to pods
1944761 - field level help instances do not use common util component <FieldLevelHelp>
1944762 - Drain on worker node during an upgrade fails due to PDB set for image registry pod when only a single replica is present
1944763 - field level help instances do not use common util component <FieldLevelHelp>
1944853 - Update to nodejs >=14.15.4 for ARM
1944974 - Duplicate KubeControllerManagerDown/KubeSchedulerDown alerts
1944986 - Clarify the ContainerRuntimeConfiguration cr description on the validation
1945027 - Button 'Copy SSH Command' does not work
1945085 - Bring back API data in etcd test
1945091 - In k8s 1.21 bump Feature:IPv6DualStack tests are disabled
1945103 - 'User credentials' shows even the VM is not running
1945104 - In k8s 1.21 bump '[sig-storage] [cis-hostpath] [Testpattern: Generic Ephemeral-volume' tests are disabled
1945146 - Remove pipeline Tech preview badge for pipelines GA operator
1945236 - Bootstrap ignition shim doesn't follow proxy settings
1945261 - Operator dependency not consistently chosen from default channel
1945312 - project deletion does not reset UI project context
1945326 - console-operator: does not check route health periodically
1945387 - Image Registry deployment should have 2 replicas and hard anti-affinity rules
1945398 - 4.8 CI failure: [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
1945431 - alerts: SystemMemoryExceedsReservation triggers too quickly
1945443 - operator-lifecycle-manager-packageserver flaps Available=False with no reason or message
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1945548 - catalog resource update failed if spec.secrets set to ""
1945584 - Elasticsearch operator fails to install on 4.8 cluster on ppc64le/s390x
1945599 - Optionally set KERNEL_VERSION and RT_KERNEL_VERSION
1945630 - Pod log filename no longer in <pod-name>-<container-name>.log format
1945637 - QE- Automation- Fixing smoke test suite for pipeline-plugin
1945646 - gcp-routes.sh running as initrc_t unnecessarily
1945659 - [oVirt] remove ovirt_cafile from ovirt-credentials secret
1945677 - Need ACM Managed Cluster Info metric enabled for OCP monitoring telemetry
1945687 - Dockerfile needs updating to new container CI registry
1945700 - Syncing boot mode after changing device should be restricted to Supermicro
1945816 - " Ingresses " should be kept in English for Chinese
1945818 - Chinese translation issues: Operator should be the same with English `Operators`
1945849 - Unnecessary series churn when a new version of kube-state-metrics is rolled out
1945910 - [aws] support byo iam roles for instances
1945948 - SNO: pods can't reach ingress when the ingress uses a different IPv6.
1946079 - Virtual master is not getting an IP address
1946097 - [oVirt] oVirt credentials secret contains unnecessary "ovirt_cafile"
1946119 - panic parsing install-config
1946243 - No relevant error when pg limit is reached in block pools page
1946307 - [CI] [UPI] use a standardized and reliable way to install google cloud SDK in UPI image
1946320 - Incorrect error message in Deployment Attach Storage Page
1946449 - [e2e][automation] Fix cloud-init tests as UI changed
1946458 - Edit Application action overwrites Deployment envFrom values on save
1946459 - In bare metal IPv6 environment, [sig-storage] [Driver: nfs] tests are failing in CI.
1946479 - In k8s 1.21 bump BoundServiceAccountTokenVolume is disabled by default
1946497 - local-storage-diskmaker pod logs "DeviceSymlinkExists" and "not symlinking, could not get lock: <nil>"
1946506 - [on-prem] mDNS plugin no longer needed
1946513 - honor use specified system reserved with auto node sizing
1946540 - auth operator: only configure webhook authenticators for internal auth when oauth-apiserver pods are ready
1946584 - Machine-config controller fails to generate MC, when machine config pool with dashes in name presents under the cluster
1946607 - etcd readinessProbe is not reflective of actual readiness
1946705 - Fix issues with "search" capability in the Topology Quick Add component
1946751 - DAY2 Confusing event when trying to add hosts to a cluster that completed installation
1946788 - Serial tests are broken because of router
1946790 - Marketplace operator flakes Available=False OperatorStarting during updates
1946838 - Copied CSVs show up as adopted components
1946839 - [Azure] While mirroring images to private registry throwing error: invalid character '<' looking for beginning of value
1946865 - no "namespace:kube_pod_container_resource_requests_cpu_cores:sum" and "namespace:kube_pod_container_resource_requests_memory_bytes:sum" metrics
1946893 - the error messages are inconsistent in DNS status conditions if the default service IP is taken
1946922 - Ingress details page doesn't show referenced secret name and link
1946929 - the default dns operator's Progressing status is always True and cluster operator dns Progressing status is False
1947036 - "failed to create Matchbox client or connect" on e2e-metal jobs or metal clusters via cluster-bot
1947066 - machine-config-operator pod crashes when noProxy is *
1947067 - [Installer] Pick up upstream fix for installer console output
1947078 - Incorrect skipped status for conditional tasks in the pipeline run
1947080 - SNO IPv6 with 'temporary 60-day domain' option fails with IPv4 exception
1947154 - [master] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install
1947164 - Print "Successfully pushed" even if the build push fails.
1947176 - OVN-Kubernetes leaves stale AddressSets around if the deletion was missed.
1947293 - IPv6 provision addresses range larger then /64 prefix (e.g. /48)
1947311 - When adding a new node to localvolumediscovery UI does not show pre-existing node name's
1947360 - [vSphere csi driver operator] operator pod runs as “BestEffort” qosClass
1947371 - [vSphere csi driver operator] operator doesn't create “csidriver” instance
1947402 - Single Node cluster upgrade: AWS EBS CSI driver deployment is stuck on rollout
1947478 - discovery v1 beta1 EndpointSlice is deprecated in Kubernetes 1.21 (OCP 4.8)
1947490 - If Clevis on a managed LUKs volume with Ignition enables, the system will fails to automatically open the LUKs volume on system boot
1947498 - policy v1 beta1 PodDisruptionBudget is deprecated in Kubernetes 1.21 (OCP 4.8)
1947663 - disk details are not synced in web-console
1947665 - Internationalization values for ceph-storage-plugin should be in file named after plugin
1947684 - MCO on SNO sometimes has rendered configs and sometimes does not
1947712 - [OVN] Many faults and Polling interval stuck for 4 seconds every roughly 5 minutes intervals.
1947719 - 8 APIRemovedInNextReleaseInUse info alerts display
1947746 - Show wrong kubernetes version from kube-scheduler/kube-controller-manager operator pods
1947756 - [azure-disk-csi-driver-operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947767 - [azure-disk-csi-driver-operator] Uses the same storage type in the sc created by it as the default sc?
1947771 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass
1947774 - CSI driver operators use "Always" imagePullPolicy in some containers
1947775 - [vSphere csi driver operator] doesn’t use the downstream images from payload.
1947776 - [vSphere csi driver operator] Should allow more nodes to be updated simultaneously for speeding up cluster upgrade
1947779 - [LSO] Should allow more nodes to be updated simultaneously for speeding up LSO upgrade
1947785 - Cloud Compute: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947789 - Console: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947791 - MCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947793 - DevEx: APIRemovedInNextReleaseInUse info alerts display
1947794 - OLM: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert
1947795 - Networking: APIRemovedInNextReleaseInUse info alerts display
1947797 - CVO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947798 - Images: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947800 - Ingress: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1947801 - Kube Storage Version Migrator APIRemovedInNextReleaseInUse info alerts display
1947803 - Openshift Apiserver: APIRemovedInNextReleaseInUse info alerts display
1947806 - Re-enable h2spec, http/2 and grpc-interop e2e tests in openshift/origin
1947828 - `download it` link should save pod log in <pod-name>-<container-name>.log format
1947866 - disk.csi.azure.com.spec.operatorLogLevel is not updated when CSO loglevel is changed
1947917 - Egress Firewall does not reliably apply firewall rules
1947946 - Operator upgrades can delete existing CSV before completion
1948011 - openshift-controller-manager constantly reporting type "Upgradeable" status Unknown
1948012 - service-ca constantly reporting type "Upgradeable" status Unknown
1948019 - [4.8] Large number of requests to the infrastructure cinder volume service
1948022 - Some on-prem namespaces missing from must-gather
1948040 - cluster-etcd-operator: etcd is using deprecated logger
1948082 - Monitoring should not set Available=False with no reason on updates
1948137 - CNI DEL not called on node reboot - OCP 4 CRI-O.
1948232 - DNS operator performs spurious updates in response to API's defaulting of daemonset's maxSurge and service's ipFamilies and ipFamilyPolicy fields
1948311 - Some jobs failing due to excessive watches: the server has received too many requests and has asked us to try again later
1948359 - [aws] shared tag was not removed from user provided IAM role
1948410 - [LSO] Local Storage Operator uses imagePullPolicy as "Always"
1948415 - [vSphere csi driver operator] clustercsidriver.spec.logLevel doesn't take effective after changing
1948427 - No action is triggered after click 'Continue' button on 'Show community Operator' windows
1948431 - TechPreviewNoUpgrade does not enable CSI migration
1948436 - The outbound traffic was broken intermittently after shutdown one egressIP node
1948443 - OCP 4.8 nightly still showing v1.20 even after 1.21 merge
1948471 - [sig-auth][Feature:OpenShiftAuthorization][Serial] authorization TestAuthorizationResourceAccessReview should succeed [Suite:openshift/conformance/serial]
1948505 - [vSphere csi driver operator] vmware-vsphere-csi-driver-operator pod restart every 10 minutes
1948513 - get-resources.sh doesn't honor the no_proxy settings
1948524 - 'DeploymentUpdated' Updated Deployment.apps/downloads -n openshift-console because it changed message is printed every minute
1948546 - VM of worker is in error state when a network has port_security_enabled=False
1948553 - When setting etcd spec.LogLevel is not propagated to etcd operand
1948555 - A lot of events "rpc error: code = DeadlineExceeded desc = context deadline exceeded" were seen in azure disk csi driver verification test
1948563 - End-to-End Secure boot deployment fails "Invalid value for input variable"
1948582 - Need ability to specify local gateway mode in CNO config
1948585 - Need a CI jobs to test local gateway mode with bare metal
1948592 - [Cluster Network Operator] Missing Egress Router Controller
1948606 - DNS e2e test fails "[sig-arch] Only known images used by tests" because it does not use a known image
1948610 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node [LinuxOnly]
1948626 - TestRouteAdmissionPolicy e2e test is failing often
1948628 - ccoctl needs to plan for future (non-AWS) platform support in the CLI
1948634 - upgrades: allow upgrades without version change
1948640 - [Descheduler] operator log reports key failed with : kubedeschedulers.operator.openshift.io "cluster" not found
1948701 - unneeded CCO alert already covered by CVO
1948703 - p&f: probes should not get 429s
1948705 - [assisted operator] SNO deployment fails - ClusterDeployment shows `bootstrap.ign was not found`
1948706 - Cluster Autoscaler Operator manifests missing annotation for ibm-cloud-managed profile
1948708 - cluster-dns-operator includes a deployment with node selector of masters for the IBM cloud managed profile
1948711 - thanos querier and prometheus-adapter should have 2 replicas
1948714 - cluster-image-registry-operator targets master nodes in ibm-cloud-managed-profile
1948716 - cluster-ingress-operator deployment targets master nodes for ibm-cloud-managed profile
1948718 - cluster-network-operator deployment manifest for ibm-cloud-managed profile contains master node selector
1948719 - Machine API components should use 1.21 dependencies
1948721 - cluster-storage-operator deployment targets master nodes for ibm-cloud-managed profile
1948725 - operator lifecycle manager does not include profile annotations for ibm-cloud-managed
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1948771 - ~50% of GCP upgrade jobs in 4.8 failing with "AggregatedAPIDown" alert on packages.coreos.com
1948782 - Stale references to the single-node-production-edge cluster profile
1948787 - secret.StringData shouldn't be used for reads
1948788 - Clicking an empty metrics graph (when there is no data) should still open metrics viewer
1948789 - Clicking on a metrics graph should show request and limits queries as well on the resulting metrics page
1948919 - Need minor update in message on channel modal
1948923 - [aws] installer forces the platform.aws.amiID option to be set, while installing a cluster into GovCloud or C2S region
1948926 - Memory Usage of Dashboard 'Kubernetes / Compute Resources / Pod' contain wrong CPU query
1948936 - [e2e][automation][prow] Prow script point to deleted resource
1948943 - (release-4.8) Limit the number of collected pods in the workloads gatherer
1948953 - Uninitialized cloud provider error when provisioning a cinder volume
1948963 - [RFE] Cluster-api-provider-ovirt should handle hugepages
1948966 - Add the ability to run a gather done by IO via a Kubernetes Job
1948981 - Align dependencies and libraries with latest ironic code
1948998 - style fixes by GoLand and golangci-lint
1948999 - Can not assign multiple EgressIPs to a namespace by using automatic way.
1949019 - PersistentVolumes page cannot sync project status automatically which will block user to create PV
1949022 - Openshift 4 has a zombie problem
1949039 - Wrong env name to get podnetinfo for hugepage in app-netutil
1949041 - vsphere: wrong image names in bundle
1949042 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the http2 tests (on OpenStack)
1949050 - Bump k8s to latest 1.21
1949061 - [assisted operator][nmstate] Continuous attempts to reconcile InstallEnv in the case of invalid NMStateConfig
1949063 - [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
1949075 - Extend openshift/api for Add card customization
1949093 - PatternFly v4.96.2 regression results in a.pf-c-button hover issues
1949096 - Restore private git clone tests
1949099 - network-check-target code cleanup
1949105 - NetworkPolicy ... should enforce ingress policy allowing any port traffic to a server on a specific protocol
1949145 - Move openshift-user-critical priority class to CCO
1949155 - Console doesn't correctly check for favorited or last namespace on load if project picker used
1949180 - Pipelines plugin model kinds aren't picked up by parser
1949202 - sriov-network-operator not available from operatorhub on ppc64le
1949218 - ccoctl not included in container image
1949237 - Bump OVN: Lots of conjunction warnings in ovn-controller container logs
1949277 - operator-marketplace: deployment manifests for ibm-cloud-managed profile have master node selectors
1949294 - [assisted operator] OPENSHIFT_VERSIONS in assisted operator subscription does not propagate
1949306 - need a way to see top API accessors
1949313 - Rename vmware-vsphere-* images to vsphere-* images before 4.8 ships
1949316 - BaremetalHost resource automatedCleaningMode ignored due to outdated vendoring
1949347 - apiserver-watcher support for dual-stack
1949357 - manila-csi-controller pod not running due to secret lack(in another ns)
1949361 - CoreDNS resolution failure for external hostnames with "A: dns: overflow unpacking uint16"
1949364 - Mention scheduling profiles in scheduler operator repository
1949370 - Testability of: Static pod installer controller deadlocks with non-existing installer pod, WAS: kube-apisrever of clsuter operator always with incorrect status due to pleg error
1949384 - Edit Default Pull Secret modal - i18n misses
1949387 - Fix the typo in auto node sizing script
1949404 - label selector on pvc creation page - i18n misses
1949410 - The referred role doesn't exist if create rolebinding from rolebinding tab of role page
1949411 - VolumeSnapshot, VolumeSnapshotClass and VolumeSnapshotConent Details tab is not translated - i18n misses
1949413 - Automatic boot order setting is done incorrectly when using by-path style device names
1949418 - Controller factory workers should always restart on panic()
1949419 - oauth-apiserver logs "[SHOULD NOT HAPPEN] failed to update managedFields for authentication.k8s.io/v1, Kind=TokenReview: failed to convert new object (authentication.k8s.io/v1, Kind=TokenReview)"
1949420 - [azure csi driver operator] pvc.status.capacity and pv.spec.capacity are processed not the same as in-tree plugin
1949435 - ingressclass controller doesn't recreate the openshift-default ingressclass after deleting it
1949480 - Listeners timeout are constantly being updated
1949481 - cluster-samples-operator restarts approximately two times per day and logs too many same messages
1949509 - Kuryr should manage API LB instead of CNO
1949514 - URL is not visible for routes at narrow screen widths
1949554 - Metrics of vSphere CSI driver sidecars are not collected
1949582 - OCP v4.7 installation with OVN-Kubernetes fails with error "egress bandwidth restriction -1 is not equals"
1949589 - APIRemovedInNextEUSReleaseInUse Alert Missing
1949591 - Alert does not catch removed api usage during end-to-end tests.
1949593 - rename DeprecatedAPIInUse alert to APIRemovedInNextReleaseInUse
1949612 - Install with 1.21 Kubelet is spamming logs with failed to get stats failed command 'du'
1949626 - machine-api fails to create AWS client in new regions
1949661 - Kubelet Workloads Management changes for OCPNODE-529
1949664 - Spurious keepalived liveness probe failures
1949671 - System services such as openvswitch are stopped before pod containers on system shutdown or reboot
1949677 - multus is the first pod on a new node and the last to go ready
1949711 - cvo unable to reconcile deletion of openshift-monitoring namespace
1949721 - Pick 99237: Use the audit ID of a request for better correlation
1949741 - Bump golang version of cluster-machine-approver
1949799 - ingresscontroller should deny the setting when spec.tuningOptions.threadCount exceed 64
1949810 - OKD 4.7 unable to access Project Topology View
1949818 - Add e2e test to perform MCO operation Single Node OpenShift
1949820 - Unable to use `oc adm top is` shortcut when asking for `imagestreams`
1949862 - The ccoctl tool hits the panic sometime when running the delete subcommand
1949866 - The ccoctl fails to create authentication file when running the command `ccoctl aws create-identity-provider` with `--output-dir` parameter
1949880 - adding providerParameters.gcp.clientAccess to existing ingresscontroller doesn't work
1949882 - service-idler build error
1949898 - Backport RP#848 to OCP 4.8
1949907 - Gather summary of PodNetworkConnectivityChecks
1949923 - some defined rootVolumes zones not used on installation
1949928 - Samples Operator updates break CI tests
1949935 - Fix incorrect access review check on start pipeline kebab action
1949956 - kaso: add minreadyseconds to ensure we don't have an LB outage on kas
1949967 - Update Kube dependencies in MCO to 1.21
1949972 - Descheduler metrics: populate build info data and make the metrics entries more readeable
1949978 - [sig-network-edge][Conformance][Area:Networking][Feature:Router] The HAProxy router should pass the h2spec conformance tests [Suite:openshift/conformance/parallel/minimal]
1949990 - (release-4.8) Extend the OLM operator gatherer to include CSV display name
1949991 - openshift-marketplace pods are crashlooping
1950007 - [CI] [UPI] easy_install is not reliable enough to be used in an image
1950026 - [Descheduler] Need better way to handle evicted pod count for removeDuplicate pod strategy
1950047 - CSV deployment template custom annotations are not propagated to deployments
1950112 - SNO: machine-config pool is degraded: error running chcon -R -t var_run_t /run/mco-machine-os-content/os-content-321709791
1950113 - in-cluster operators need an API for additional AWS tags
1950133 - MCO creates empty conditions on the kubeletconfig object
1950159 - Downstream ovn-kubernetes repo should have no linter errors
1950175 - Update Jenkins and agent base image to Go 1.16
1950196 - ssh Key is added even with 'Expose SSH access to this virtual machine' unchecked
1950210 - VPA CRDs use deprecated API version
1950219 - KnativeServing is not shown in list on global config page
1950232 - [Descheduler] - The minKubeVersion should be 1.21
1950236 - Update OKD imagestreams to prefer centos7 images
1950270 - should use "kubernetes.io/os" in the dns/ingresscontroller node selector description when executing oc explain command
1950284 - Tracking bug for NE-563 - support user-defined tags on AWS load balancers
1950341 - NetworkPolicy: allow-from-router policy does not allow access to service when the endpoint publishing strategy is HostNetwork on OpenshiftSDN network
1950379 - oauth-server is in pending/crashbackoff at beginning 50% of CI runs
1950384 - [sig-builds][Feature:Builds][sig-devex][Feature:Jenkins][Slow] openshift pipeline build perm failing
1950409 - Descheduler operator code and docs still reference v1beta1
1950417 - The Marketplace Operator is building with EOL k8s versions
1950430 - CVO serves metrics over HTTP, despite a lack of consumers
1950460 - RFE: Change Request Size Input to Number Spinner Input
1950471 - e2e-metal-ipi-ovn-dualstack is failing with etcd unable to bootstrap
1950532 - Include "update" when referring to operator approval and channel
1950543 - Document non-HA behaviors in the MCO (SingleNodeOpenshift)
1950590 - CNO: Too many OVN netFlows collectors causes ovnkube pods CrashLoopBackOff
1950653 - BuildConfig ignores Args
1950761 - Monitoring operator deployments anti-affinity rules prevent their rollout on single-node
1950908 - kube_pod_labels metric does not contain k8s labels
1950912 - [e2e][automation] add devconsole tests
1950916 - [RFE]console page show error when vm is poused
1950934 - Unnecessary rollouts can happen due to unsorted endpoints
1950935 - Updating cluster-network-operator builder & base images to be consistent with ART
1950978 - the ingressclass cannot be removed even after deleting the related custom ingresscontroller
1951007 - ovn master pod crashed
1951029 - Drainer panics on missing context for node patch
1951034 - (release-4.8) Split up the GatherClusterOperators into smaller parts
1951042 - Panics every few minutes in kubelet logs post-rebase
1951043 - Start Pipeline Modal Parameters should accept empty string defaults
1951058 - [gcp-pd-csi-driver-operator] topology and multipods capabilities are not enabled in e2e tests
1951066 - [IBM][ROKS] Enable volume snapshot controllers on IBM Cloud
1951084 - avoid benign "Path \"/run/secrets/etc-pki-entitlement\" from \"/etc/containers/mounts.conf\" doesn't exist, skipping" messages
1951158 - Egress Router CRD missing Addresses entry
1951169 - Improve API Explorer discoverability from the Console
1951174 - re-pin libvirt to 6.0.0
1951203 - oc adm catalog mirror can generate ICSPs that exceed etcd's size limit
1951209 - RerunOnFailure runStrategy shows wrong VM status (Starting) on Succeeded VMI
1951212 - User/Group details shows unrelated subjects in role bindings tab
1951214 - VM list page crashes when the volume type is sysprep
1951339 - Cluster-version operator does not manage operand container environments when manifest lacks opinions
1951387 - opm index add doesn't respect deprecated bundles
1951412 - Configmap gatherer can fail incorrectly
1951456 - Docs and linting fixes
1951486 - Replace "kubevirt_vmi_network_traffic_bytes_total" with new metrics names
1951505 - Remove deprecated techPreviewUserWorkload field from CMO's configmap
1951558 - Backport Upstream 101093 for Startup Probe Fix
1951585 - enterprise-pod fails to build
1951636 - assisted service operator use default serviceaccount in operator bundle
1951637 - don't rollout a new kube-apiserver revision on oauth accessTokenInactivityTimeout changes
1951639 - Bootstrap API server unclean shutdown causes reconcile delay
1951646 - Unexpected memory climb while container not in use
1951652 - Add retries to opm index add
1951670 - Error gathering bootstrap log after pivot: The bootstrap machine did not execute the release-image.service systemd unit
1951671 - Excessive writes to ironic Nodes
1951705 - kube-apiserver needs alerts on CPU utlization
1951713 - [OCP-OSP] After changing image in machine object it enters in Failed - Can't find created instance
1951853 - dnses.operator.openshift.io resource's spec.nodePlacement.tolerations godoc incorrectly describes default behavior
1951858 - unexpected text '0' on filter toolbar on RoleBinding tab
1951860 - [4.8] add Intel XXV710 NIC model (1572) support in SR-IOV Operator
1951870 - sriov network resources injector: user defined injection removed existing pod annotations
1951891 - [migration] cannot change ClusterNetwork CIDR during migration
1951952 - [AWS CSI Migration] Metrics for cloudprovider error requests are lost
1952001 - Delegated authentication: reduce the number of watch requests
1952032 - malformatted assets in CMO
1952045 - Mirror nfs-server image used in jenkins-e2e
1952049 - Helm: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won't access APIs that trigger APIRemovedInNextReleaseInUse alert
1952079 - rebase openshift/sdn to kube 1.21
1952111 - Optimize importing from @patternfly/react-tokens
1952174 - DNS operator claims to be done upgrading before it even starts
1952179 - OpenStack Provider Ports UI Underscore Variables
1952187 - Pods stuck in ImagePullBackOff with errors like rpc error: code = Unknown desc = Error committing the finished image: image with ID "SomeLongID" already exists, but uses a different top layer: that ID
1952211 - cascading mounts happening exponentially on when deleting openstack-cinder-csi-driver-node pods
1952214 - Console Devfile Import Dev Preview broken
1952238 - Catalog pods don't report termination logs to catalog-operator
1952262 - Need support external gateway via hybrid overlay
1952266 - etcd operator bumps status.version[name=operator] before operands update
1952268 - etcd operator should not set Degraded=True EtcdMembersDegraded on healthy machine-config node reboots
1952282 - CSR approver races with nodelink controller and does not requeue
1952310 - VM cannot start up if the ssh key is added by another template
1952325 - [e2e][automation] Check support modal in ssh tests and skip template parentSupport
1952333 - openshift/kubernetes vulnerable to CVE-2021-3121
1952358 - Openshift-apiserver CO unavailable in fresh OCP 4.7.5 installations
1952367 - No VM status on overview page when VM is pending
1952368 - worker pool went degraded due to no rpm-ostree on rhel worker during applying new mc
1952372 - VM stop action should not be there if the VM is not running
1952405 - console-operator is not reporting correct Available status
1952448 - Switch from Managed to Disabled mode: no IP removed from configuration and no container metal3-static-ip-manager stopped
1952460 - In k8s 1.21 bump '[sig-network] Firewall rule control plane should not expose well-known ports' test is disabled
1952473 - Monitor pod placement during upgrades
1952487 - Template filter does not work properly
1952495 - “Create” button on the Templates page is confuse
1952527 - [Multus] multi-networkpolicy does wrong filtering
1952545 - Selection issue when inserting YAML snippets
1952585 - Operator links for 'repository' and 'container image' should be clickable in OperatorHub
1952604 - Incorrect port in external loadbalancer config
1952610 - [aws] image-registry panics when the cluster is installed in a new region
1952611 - Tracking bug for OCPCLOUD-1115 - support user-defined tags on AWS EC2 Instances
1952618 - 4.7.4->4.7.8 Upgrade Caused OpenShift-Apiserver Outage
1952625 - Fix translator-reported text issues
1952632 - 4.8 installer should default ClusterVersion channel to stable-4.8
1952635 - Web console displays a blank page- white space instead of cluster information
1952665 - [Multus] multi-networkpolicy pod continue restart due to OOM (out of memory)
1952666 - Implement Enhancement 741 for Kubelet
1952667 - Update Readme for cluster-baremetal-operator with details about the operator
1952684 - cluster-etcd-operator: metrics controller panics on invalid response from client
1952728 - It was not clear for users why Snapshot feature was not available
1952730 - “Customize virtual machine” and the “Advanced” feature are confusing in wizard
1952732 - Users did not understand the boot source labels
1952741 - Monitoring DB: after set Time Range as Custom time range, no data display
1952744 - PrometheusDuplicateTimestamps with user workload monitoring enabled
1952759 - [RFE]It was not immediately clear what the Star icon meant
1952795 - cloud-network-config-controller CRD does not specify correct plural name
1952819 - failed to configure pod interface: error while waiting on flows for pod: timed out waiting for OVS flows
1952820 - [LSO] Delete localvolume pv is failed
1952832 - [IBM][ROKS] Enable the Web console UI to deploy OCS in External mode on IBM Cloud
1952891 - Upgrade failed due to cinder csi driver not deployed
1952904 - Linting issues in gather/clusterconfig package
1952906 - Unit tests for configobserver.go
1952931 - CI does not check leftover PVs
1952958 - Runtime error loading console in Safari 13
1953019 - [Installer][baremetal][metal3] The baremetal IPI installer fails on delete cluster with: failed to clean baremetal bootstrap storage pool
1953035 - Installer should error out if publish: Internal is set while deploying OCP cluster on any on-prem platform
1953041 - openshift-authentication-operator uses 3.9k% of its requested CPU
1953077 - Handling GCP's: Error 400: Permission accesscontextmanager.accessLevels.list is not valid for this resource
1953102 - kubelet CPU use during an e2e run increased 25% after rebase
1953105 - RHCOS system components registered a 3.5x increase in CPU use over an e2e run before and after 4/9
1953169 - endpoint slice controller doesn't handle services target port correctly
1953257 - Multiple EgressIPs per node for one namespace when "oc get hostsubnet"
1953280 - DaemonSet/node-resolver is not recreated by dns operator after deleting it
1953291 - cluster-etcd-operator: peer cert DNS SAN is populated incorrectly
1953418 - [e2e][automation] Fix vm wizard validate tests
1953518 - thanos-ruler pods failed to start up for "cannot unmarshal DNS message"
1953530 - Fix openshift/sdn unit test flake
1953539 - kube-storage-version-migrator: priorityClassName not set
1953543 - (release-4.8) Add missing sample archive data
1953551 - build failure: unexpected trampoline for shared or dynamic linking
1953555 - 
GlusterFS tests fail on ipv6 clusters\n1953647 - prometheus-adapter should have a PodDisruptionBudget in HA topology\n1953670 - ironic container image build failing because esp partition size is too small\n1953680 - ipBlock ignoring all other cidr\u0027s apart from the last one specified\n1953691 - Remove unused mock\n1953703 - Inconsistent usage of Tech preview badge in OCS plugin of OCP Console\n1953726 - Fix issues related to loading dynamic plugins\n1953729 - e2e unidling test is flaking heavily on SNO jobs\n1953795 - Ironic can\u0027t virtual media attach ISOs sourced from ingress routes\n1953798 - GCP e2e (parallel and upgrade) regularly trigger KubeAPIErrorBudgetBurn alert, also happens on AWS\n1953803 - [AWS] Installer should do pre-check to ensure user-provided private hosted zone name is valid for OCP cluster\n1953810 - Allow use of storage policy in VMC environments\n1953830 - The oc-compliance build does not available for OCP4.8\n1953846 - SystemMemoryExceedsReservation alert should consider hugepage reservation\n1953977 - [4.8] packageserver pods restart many times on the SNO cluster\n1953979 - Ironic caching virtualmedia images results in disk space limitations\n1954003 - Alerts shouldn\u0027t report any alerts in firing or pending state: openstack-cinder-csi-driver-controller-metrics TargetDown\n1954025 - Disk errors while scaling up a node with multipathing enabled\n1954087 - Unit tests for kube-scheduler-operator\n1954095 - Apply user defined tags in AWS Internal Registry\n1954105 - TaskRuns Tab in PipelineRun Details Page makes cluster based calls for TaskRuns\n1954124 - oc set volume not adding storageclass to pvc which leads to issues using snapshots\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954177 - machine-api: admissionReviewVersions v1beta1 is going to be removed in 1.22\n1954187 - multus: admissionReviewVersions v1beta1 is going to be removed in 1.22\n1954248 - 
Disable Alertmanager Protractor e2e tests\n1954317 - [assisted operator] Environment variables set in the subscription not being inherited by the assisted-service container\n1954330 - NetworkPolicy: allow-from-router with label policy-group.network.openshift.io/ingress: \"\" does not work on a upgraded cluster\n1954421 - Get \u0027Application is not available\u0027 when access Prometheus UI\n1954459 - Error: Gateway Time-out display on Alerting console\n1954460 - UI, The status of \"Used Capacity Breakdown [Pods]\" is \"Not available\"\n1954509 - FC volume is marked as unmounted after failed reconstruction\n1954540 - Lack translation for local language on pages under storage menu\n1954544 - authn operator: endpoints controller should use the context it creates\n1954554 - Add e2e tests for auto node sizing\n1954566 - Cannot update a component (`UtilizationCard`) error when switching perspectives manually\n1954597 - Default image for GCP does not support ignition V3\n1954615 - Undiagnosed panic detected in pod: pods/openshift-cloud-credential-operator_cloud-credential-operator\n1954634 - apirequestcounts does not honor max users\n1954638 - apirequestcounts should indicate removedinrelease of empty instead of 2.0\n1954640 - Support of gatherers with different periods\n1954671 - disable volume expansion support in vsphere csi driver storage class\n1954687 - localvolumediscovery and localvolumset e2es are disabled\n1954688 - LSO has missing examples for localvolumesets\n1954696 - [API-1009] apirequestcounts should indicate useragent\n1954715 - Imagestream imports become very slow when doing many in parallel\n1954755 - Multus configuration should allow for net-attach-defs referenced in the openshift-multus namespace\n1954765 - CCO: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1954768 - baremetal-operator: check (see bug 
1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component won\u0027t access APIs that trigger APIRemovedInNextReleaseInUse alert\n1954770 - Backport upstream fix for Kubelet getting stuck in DiskPressure\n1954773 - OVN: check (see bug 1947801#c4 steps) audit log to find deprecated API access related to this component to ensure this component does not trigger APIRemovedInNextReleaseInUse alert\n1954783 - [aws] support byo private hosted zone\n1954790 - KCM Alert PodDisruptionBudget At and Limit do not alert with maxUnavailable or MinAvailable by percentage\n1954830 - verify-client-go job is failing for release-4.7 branch\n1954865 - Add necessary priority class to pod-identity-webhook deployment\n1954866 - Add necessary priority class to downloads\n1954870 - Add necessary priority class to network components\n1954873 - dns server may not be specified for clusters with more than 2 dns servers specified by openstack. \n1954891 - Add necessary priority class to pruner\n1954892 - Add necessary priority class to ingress-canary\n1954931 - (release-4.8) Remove legacy URL anonymization in the ClusterOperator related resources\n1954937 - [API-1009] `oc get apirequestcount` shows blank for column REQUESTSINCURRENTHOUR\n1954959 - unwanted decorator shown for revisions in topology though should only be shown only for knative services\n1954972 - TechPreviewNoUpgrade featureset can be undone\n1954973 - \"read /proc/pressure/cpu: operation not supported\" in node-exporter logs\n1954994 - should update to 2.26.0 for prometheus resources label\n1955051 - metrics \"kube_node_status_capacity_cpu_cores\" does not exist\n1955089 - Support [sig-cli] oc observe works as expected test for IPv6\n1955100 - Samples: APIRemovedInNextReleaseInUse info alerts display\n1955102 - Add vsphere_node_hw_version_total metric to the collected metrics\n1955114 - 4.7-e2e-metal-ipi-ovn-dualstack intermittent test failures, worker hostname is overwritten by 
NM\n1955196 - linuxptp-daemon crash on 4.8\n1955226 - operator updates apirequestcount CRD over and over\n1955229 - release-openshift-origin-installer-e2e-aws-calico-4.7 is permfailing\n1955256 - stop collecting API that no longer exists\n1955324 - Kubernetes Autoscaler should use Go 1.16 for testing scripts\n1955336 - Failure to Install OpenShift on GCP due to Cluster Name being similar to / contains \"google\"\n1955414 - 4.8 -\u003e 4.7 rollbacks broken on unrecognized flowschema openshift-etcd-operator\n1955445 - Drop crio image metrics with high cardinality\n1955457 - Drop container_memory_failures_total metric because of high cardinality\n1955467 - Disable collection of node_mountstats_nfs metrics in node_exporter\n1955474 - [aws-ebs-csi-driver] rebase from version v1.0.0\n1955478 - Drop high-cardinality metrics from kube-state-metrics which aren\u0027t used\n1955517 - Failed to upgrade from 4.6.25 to 4.7.8 due to the machine-config degradation\n1955548 - [IPI][OSP] OCP 4.6/4.7 IPI with kuryr exceeds defined serviceNetwork range\n1955554 - MAO does not react to events triggered from Validating Webhook Configurations\n1955589 - thanos-querier should have a PodDisruptionBudget in HA topology\n1955595 - Add DevPreviewLongLifecycle Descheduler profile\n1955596 - Pods stuck in creation phase on realtime kernel SNO\n1955610 - release-openshift-origin-installer-old-rhcos-e2e-aws-4.7 is permfailing\n1955622 - 4.8-e2e-metal-assisted jobs: Timeout of 360 seconds expired waiting for Cluster to be in status [\u0027installing\u0027, \u0027error\u0027]\n1955701 - [4.8] RHCOS boot image bump for RHEL 8.4 Beta\n1955749 - OCP branded templates need to be translated\n1955761 - packageserver clusteroperator does not set reason or message for Available condition\n1955783 - NetworkPolicy: ACL audit log message for allow-from-router policy should also include the namespace to distinguish between two policies similarly named configured in respective namespaces\n1955803 - OperatorHub 
- console accepts any value for \"Infrastructure features\" annotation\n1955822 - CIS Benchmark 5.4.1 Fails on ROKS 4: Prefer using secrets as files over secrets as environment variables\n1955854 - Ingress clusteroperator reports Degraded=True/Available=False if any ingresscontroller is degraded or unavailable\n1955862 - Local Storage Operator using LocalVolume CR fails to create PV\u0027s when backend storage failure is simulated\n1955874 - Webscale: sriov vfs are not created and sriovnetworknodestate indicates sync succeeded - state is not correct\n1955879 - Customer tags cannot be seen in S3 level when set spec.managementState from Managed-\u003e Removed-\u003e Managed in configs.imageregistry with high ratio\n1955969 - Workers cannot be deployed attached to multiple networks. \n1956079 - Installer gather doesn\u0027t collect any networking information\n1956208 - Installer should validate root volume type\n1956220 - Set htt proxy system properties as expected by kubernetes-client\n1956281 - Disconnected installs are failing with kubelet trying to pause image from the internet\n1956334 - Event Listener Details page does not show Triggers section\n1956353 - test: analyze job consistently fails\n1956372 - openshift-gcp-routes causes disruption during upgrade by stopping before all pods terminate\n1956405 - Bump k8s dependencies in cluster resource override admission operator\n1956411 - Apply custom tags to AWS EBS volumes\n1956480 - [4.8] Bootimage bump tracker\n1956606 - probes FlowSchema manifest not included in any cluster profile\n1956607 - Multiple manifests lack cluster profile annotations\n1956609 - [cluster-machine-approver] CSRs for replacement control plane nodes not approved after restore from backup\n1956610 - manage-helm-repos manifest lacks cluster profile annotations\n1956611 - OLM CRD schema validation failing against CRs where the value of a string field is a blank string\n1956650 - The container disk URL is empty for Windows guest tools\n1956768 - 
aws-ebs-csi-driver-controller-metrics TargetDown\n1956826 - buildArgs does not work when the value is taken from a secret\n1956895 - Fix chatty kubelet log message\n1956898 - fix log files being overwritten on container state loss\n1956920 - can\u0027t open terminal for pods that have more than one container running\n1956959 - ipv6 disconnected sno crd deployment hive reports success status and clusterdeployrmet reporting false\n1956978 - Installer gather doesn\u0027t include pod names in filename\n1957039 - Physical VIP for pod -\u003e Svc -\u003e Host is incorrectly set to an IP of 169.254.169.2 for Local GW\n1957041 - Update CI e2echart with more node info\n1957127 - Delegated authentication: reduce the number of watch requests\n1957131 - Conformance tests for OpenStack require the Cinder client that is not included in the \"tests\" image\n1957146 - Only run test/extended/router/idle tests on OpenshiftSDN or OVNKubernetes\n1957149 - CI: \"Managed cluster should start all core operators\" fails with: OpenStackCinderDriverStaticResourcesControllerDegraded: \"volumesnapshotclass.yaml\" (string): missing dynamicClient\n1957179 - Incorrect VERSION in node_exporter\n1957190 - CI jobs failing due too many watch requests (prometheus-operator)\n1957198 - Misspelled console-operator condition\n1957227 - Issue replacing the EnvVariables using the unsupported ConfigMap\n1957260 - [4.8] [gcp] Installer is missing new region/zone europe-central2\n1957261 - update godoc for new build status image change trigger fields\n1957295 - Apply priority classes conventions as test to openshift/origin repo\n1957315 - kuryr-controller doesn\u0027t indicate being out of quota\n1957349 - [Azure] Machine object showing Failed phase even node is ready and VM is running properly\n1957374 - mcddrainerr doesn\u0027t list specific pod\n1957386 - Config serve and validate command should be under alpha\n1957446 - prepare CCO for future without v1beta1 CustomResourceDefinitions\n1957502 - Infrequent 
panic in kube-apiserver in aws-serial job\n1957561 - lack of pseudolocalization for some text on Cluster Setting page\n1957584 - Routes are not getting created when using hostname without FQDN standard\n1957597 - Public DNS records were not deleted when destroying a cluster which is using byo private hosted zone\n1957645 - Event \"Updated PrometheusRule.monitoring.coreos.com/v1 because it changed\" is frequently looped with weird empty {} changes\n1957708 - e2e-metal-ipi and related jobs fail to bootstrap due to multiple VIP\u0027s\n1957726 - Pod stuck in ContainerCreating - Failed to start transient scope unit: Connection timed out\n1957748 - Ptp operator pod should have CPU and memory requests set but not limits\n1957756 - Device Replacemet UI, The status of the disk is \"replacement ready\" before I clicked on \"start replacement\"\n1957772 - ptp daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1957775 - CVO creating cloud-controller-manager too early causing upgrade failures\n1957809 - [OSP] Install with invalid platform.openstack.machinesSubnet results in runtime error\n1957822 - Update apiserver tlsSecurityProfile description to include Custom profile\n1957832 - CMO end-to-end tests work only on AWS\n1957856 - \u0027resource name may not be empty\u0027 is shown in CI testing\n1957869 - baremetal IPI power_interface for irmc is inconsistent\n1957879 - cloud-controller-manage ClusterOperator manifest does not declare relatedObjects\n1957889 - Incomprehensible documentation of the GatherClusterOperatorPodsAndEvents gatherer\n1957893 - ClusterDeployment / Agent conditions show \"ClusterAlreadyInstalling\" during each spoke install\n1957895 - Cypress helper projectDropdown.shouldContain is not an assertion\n1957908 - Many e2e failed requests caused by kube-storage-version-migrator-operator\u0027s version reads\n1957926 - \"Add Capacity\" should allow to add n*3 (or n*4) local devices at 
once\n1957951 - [aws] destroy can get blocked on instances stuck in shutting-down state\n1957967 - Possible test flake in listPage Cypress view\n1957972 - Leftover templates from mdns\n1957976 - Ironic execute_deploy_steps command to ramdisk times out, resulting in a failed deployment in 4.7\n1957982 - Deployment Actions clickable for view-only projects\n1957991 - ClusterOperatorDegraded can fire during installation\n1958015 - \"config-reloader-cpu\" and \"config-reloader-memory\" flags have been deprecated for prometheus-operator\n1958080 - Missing i18n for login, error and selectprovider pages\n1958094 - Audit log files are corrupted sometimes\n1958097 - don\u0027t show \"old, insecure token format\" if the token does not actually exist\n1958114 - Ignore staged vendor files in pre-commit script\n1958126 - [OVN]Egressip doesn\u0027t take effect\n1958158 - OAuth proxy container for AlertManager and Thanos are flooding the logs\n1958216 - ocp libvirt: dnsmasq options in install config should allow duplicate option names\n1958245 - cluster-etcd-operator: static pod revision is not visible from etcd logs\n1958285 - Deployment considered unhealthy despite being available and at latest generation\n1958296 - OLM must explicitly alert on deprecated APIs in use\n1958329 - pick 97428: add more context to log after a request times out\n1958367 - Build metrics do not aggregate totals by build strategy\n1958391 - Update MCO KubeletConfig to mixin the API Server TLS Security Profile Singleton\n1958405 - etcd: current health checks and reporting are not adequate to ensure availability\n1958406 - Twistlock flags mode of /var/run/crio/crio.sock\n1958420 - openshift-install 4.7.10 fails with segmentation error\n1958424 - aws: support more auth options in manual mode\n1958439 - Install/Upgrade button on Install/Upgrade Helm Chart page does not work with Form View\n1958492 - CCO: pod-identity-webhook still accesses APIRemovedInNextReleaseInUse\n1958643 - All pods creation stuck due 
to SR-IOV webhook timeout\n1958679 - Compression on pool can\u0027t be disabled via UI\n1958753 - VMI nic tab is not loadable\n1958759 - Pulling Insights report is missing retry logic\n1958811 - VM creation fails on API version mismatch\n1958812 - Cluster upgrade halts as machine-config-daemon fails to parse `rpm-ostree status` during cluster upgrades\n1958861 - [CCO] pod-identity-webhook certificate request failed\n1958868 - ssh copy is missing when vm is running\n1958884 - Confusing error message when volume AZ not found\n1958913 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1958930 - network config in machine configs prevents addition of new nodes with static networking via kargs\n1958958 - [SCALE] segfault with ovnkube adding to address set\n1958972 - [SCALE] deadlock in ovn-kube when scaling up to 300 nodes\n1959041 - LSO Cluster UI,\"Troubleshoot\" link does not exist after scale down osd pod\n1959058 - ovn-kubernetes has lock contention on the LSP cache\n1959158 - packageserver clusteroperator Available condition set to false on any Deployment spec change\n1959177 - Descheduler dev manifests are missing permissions\n1959190 - Set LABEL io.openshift.release.operator=true for driver-toolkit image addition to payload\n1959194 - Ingress controller should use minReadySeconds because otherwise it is disrupted during deployment updates\n1959278 - Should remove prometheus servicemonitor from openshift-user-workload-monitoring\n1959294 - openshift-operator-lifecycle-manager:olm-operator-serviceaccount should not rely on external networking for health check\n1959327 - Degraded nodes on upgrade - Cleaning bootversions: Read-only file system\n1959406 - Difficult to debug performance on ovn-k without pprof enabled\n1959471 - Kube sysctl conformance tests are disabled, meaning we can\u0027t submit conformance results\n1959479 - machines doesn\u0027t support dual-stack loadbalancers on Azure\n1959513 
- Cluster-kube-apiserver does not use library-go for audit pkg\n1959519 - Operand details page only renders one status donut no matter how many \u0027podStatuses\u0027 descriptors are used\n1959550 - Overly generic CSS rules for dd and dt elements breaks styling elsewhere in console\n1959564 - Test verify /run filesystem contents failing\n1959648 - oc adm top --help indicates that oc adm top can display storage usage while it cannot\n1959650 - Gather SDI-related MachineConfigs\n1959658 - showing a lot \"constructing many client instances from the same exec auth config\"\n1959696 - Deprecate \u0027ConsoleConfigRoute\u0027 struct in console-operator config\n1959699 - [RFE] Collect LSO pod log and daemonset log managed by LSO\n1959703 - Bootstrap gather gets into an infinite loop on bootstrap-in-place mode\n1959711 - Egressnetworkpolicy doesn\u0027t work when configure the EgressIP\n1959786 - [dualstack]EgressIP doesn\u0027t work on dualstack cluster for IPv6\n1959916 - Console not works well against a proxy in front of openshift clusters\n1959920 - UEFISecureBoot set not on the right master node\n1959981 - [OCPonRHV] - Affinity Group should not create by default if we define empty affinityGroupsNames: []\n1960035 - iptables is missing from ose-keepalived-ipfailover image\n1960059 - Remove \"Grafana UI\" link from Console Monitoring \u003e Dashboards page\n1960089 - ImageStreams list page, detail page and breadcrumb are not following CamelCase conventions\n1960129 - [e2e][automation] add smoke tests about VM pages and actions\n1960134 - some origin images are not public\n1960171 - Enable SNO checks for image-registry\n1960176 - CCO should recreate a user for the component when it was removed from the cloud providers\n1960205 - The kubelet log flooded with reconcileState message once CPU manager enabled\n1960255 - fixed obfuscation permissions\n1960257 - breaking changes in pr template\n1960284 - ExternalTrafficPolicy Local does not preserve connections correctly on 
shutdown, policy Cluster has significant performance cost\n1960323 - Address issues raised by coverity security scan\n1960324 - manifests: extra \"spec.version\" in console quickstarts makes CVO hotloop\n1960330 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960334 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960337 - manifests: invalid selector in ServiceMonitor makes CVO hotloop\n1960339 - manifests: unset \"preemptionPolicy\" makes CVO hotloop\n1960531 - Items under \u0027Current Bandwidth\u0027 for Dashboard \u0027Kubernetes / Networking / Pod\u0027 keep added for every access\n1960534 - Some graphs of console dashboards have no legend and tooltips are difficult to undstand compared with grafana\n1960546 - Add virt_platform metric to the collected metrics\n1960554 - Remove rbacv1beta1 handling code\n1960612 - Node disk info in overview/details does not account for second drive where /var is located\n1960619 - Image registry integration tests use old-style OAuth tokens\n1960683 - GlobalConfigPage is constantly requesting resources\n1960711 - Enabling IPsec runtime causing incorrect MTU on Pod interfaces\n1960716 - Missing details for debugging\n1960732 - Outdated manifests directory in CSI driver operator repositories\n1960757 - [OVN] hostnetwork pod can access MCS port 22623 or 22624 on master\n1960758 - oc debug / oc adm must-gather do not require openshift/tools and openshift/must-gather to be \"the newest\"\n1960767 - /metrics endpoint of the Grafana UI is accessible without authentication\n1960780 - CI: failed to create PDB \"service-test\" the server could not find the requested resource\n1961064 - Documentation link to network policies is outdated\n1961067 - Improve log gathering logic\n1961081 - policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget in CMO logs\n1961091 - Gather MachineHealthCheck definitions\n1961120 - CSI driver operators fail when 
upgrading a cluster\n1961173 - recreate existing static pod manifests instead of updating\n1961201 - [sig-network-edge] DNS should answer A and AAAA queries for a dual-stack service is constantly failing\n1961314 - Race condition in operator-registry pull retry unit tests\n1961320 - CatalogSource does not emit any metrics to indicate if it\u0027s ready or not\n1961336 - Devfile sample for BuildConfig is not defined\n1961356 - Update single quotes to double quotes in string\n1961363 - Minor string update for \" No Storage classes found in cluster, adding source is disabled.\"\n1961393 - DetailsPage does not work with group~version~kind\n1961452 - Remove \"Alertmanager UI\" link from Console Monitoring \u003e Alerting page\n1961466 - Some dropdown placeholder text on route creation page is not translated\n1961472 - openshift-marketplace pods in CrashLoopBackOff state after RHACS installed with an SCC with readOnlyFileSystem set to true\n1961506 - NodePorts do not work on RHEL 7.9 workers (was \"4.7 -\u003e 4.8 upgrade is stuck at Ingress operator Degraded with rhel 7.9 workers\")\n1961536 - clusterdeployment without pull secret is crashing assisted service pod\n1961538 - manifests: invalid namespace in ClusterRoleBinding makes CVO hotloop\n1961545 - Fixing Documentation Generation\n1961550 - HAproxy pod logs showing error \"another server named \u0027pod:httpd-7c7ccfffdc-wdkvk:httpd:8080-tcp:10.128.x.x:8080\u0027 was already defined at line 326, please use distinct names\"\n1961554 - respect the shutdown-delay-duration from OpenShiftAPIServerConfig\n1961561 - The encryption controllers send lots of request to an API server\n1961582 - Build failure on s390x\n1961644 - NodeAuthenticator tests are failing in IPv6\n1961656 - driver-toolkit missing some release metadata\n1961675 - Kebab menu of taskrun contains Edit options which should not be present\n1961701 - Enhance gathering of events\n1961717 - Update runtime dependencies to Wallaby builds for bugfixes\n1961829 - 
Quick starts prereqs not shown when description is long\n1961852 - Excessive lock contention when adding many pods selected by the same NetworkPolicy\n1961878 - Add Sprint 199 translations\n1961897 - Remove history listener before console UI is unmounted\n1961925 - New ManagementCPUsOverride admission plugin blocks pod creation in clusters with no nodes\n1962062 - Monitoring dashboards should support default values of \"All\"\n1962074 - SNO:the pod get stuck in CreateContainerError and prompt \"failed to add conmon to systemd sandbox cgroup: dial unix /run/systemd/private: connect: resource temporarily unavailable\" after adding a performanceprofile\n1962095 - Replace gather-job image without FQDN\n1962153 - VolumeSnapshot routes are ambiguous, too generic\n1962172 - Single node CI e2e tests kubelet metrics endpoints intermittent downtime\n1962219 - NTO relies on unreliable leader-for-life implementation. \n1962256 - use RHEL8 as the vm-example\n1962261 - Monitoring components requesting more memory than they use\n1962274 - OCP on RHV installer fails to generate an install-config with only 2 hosts in RHV cluster\n1962347 - Cluster does not exist logs after successful installation\n1962392 - After upgrade from 4.5.16 to 4.6.17, customer\u0027s application is seeing re-transmits\n1962415 - duplicate zone information for in-tree PV after enabling migration\n1962429 - Cannot create windows vm because kubemacpool.io denied the request\n1962525 - [Migration] SDN migration stuck on MCO on RHV cluster\n1962569 - NetworkPolicy details page should also show Egress rules\n1962592 - Worker nodes restarting during OS installation\n1962602 - Cloud credential operator scrolls info \"unable to provide upcoming...\" on unsupported platform\n1962630 - NTO: Ship the current upstream TuneD\n1962687 - openshift-kube-storage-version-migrator pod failed due to Error: container has runAsNonRoot and image will run as root\n1962698 - Console-operator can not create resource console-public 
configmap in the openshift-config-managed namespace\n1962718 - CVE-2021-29622 prometheus: open redirect under the /new endpoint\n1962740 - Add documentation to Egress Router\n1962850 - [4.8] Bootimage bump tracker\n1962882 - Version pod does not set priorityClassName\n1962905 - Ramdisk ISO source defaulting to \"http\" breaks deployment on a good amount of BMCs\n1963068 - ironic container should not specify the entrypoint\n1963079 - KCM/KS: ability to enforce localhost communication with the API server. \n1963154 - Current BMAC reconcile flow skips Ironic\u0027s deprovision step\n1963159 - Add Sprint 200 translations\n1963204 - Update to 8.4 IPA images\n1963205 - Installer is using old redirector\n1963208 - Translation typos/inconsistencies for Sprint 200 files\n1963209 - Some strings in public.json have errors\n1963211 - Fix grammar issue in kubevirt-plugin.json string\n1963213 - Memsource download script running into API error\n1963219 - ImageStreamTags not internationalized\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n1963267 - Warning: Invalid DOM property `classname`. Did you mean `className`? console warnings in volumes table\n1963502 - create template from is not descriptive\n1963676 - in vm wizard when selecting an os template it looks like selecting the flavor too\n1963833 - Cluster monitoring operator crashlooping on single node clusters due to segfault\n1963848 - Use OS-shipped stalld vs. the NTO-shipped one. 
\n1963866 - NTO: use the latest k8s 1.21.1 and openshift vendor dependencies\n1963871 - cluster-etcd-operator:[build] upgrade to go 1.16\n1963896 - The VM disks table does not show easy links to PVCs\n1963912 - \"[sig-network] DNS should provide DNS for {services, cluster, subdomain, hostname}\" failures on vsphere\n1963932 - Installation failures in bootstrap in OpenStack release jobs\n1963964 - Characters are not escaped on config ini file causing Kuryr bootstrap to fail\n1964059 - rebase openshift/sdn to kube 1.21.1\n1964197 - Failing Test vendor/k8s.io/kube-aggregator/pkg/apiserver TestProxyCertReload due to hardcoded certificate expiration\n1964203 - e2e-metal-ipi, e2e-metal-ipi-ovn-dualstack and e2e-metal-ipi-ovn-ipv6 are failing due to \"Unknown provider baremetal\"\n1964243 - The `oc compliance fetch-raw` doesn\u2019t work for disconnected cluster\n1964270 - Failed to install \u0027cluster-kube-descheduler-operator\u0027 with error: \"clusterkubedescheduleroperator.4.8.0-202105211057.p0.assembly.stream\\\": must be no more than 63 characters\"\n1964319 - Network policy \"deny all\" interpreted as \"allow all\" in description page\n1964334 - alertmanager/prometheus/thanos-querier /metrics endpoints are not secured\n1964472 - Make project and namespace requirements more visible rather than giving me an error after submission\n1964486 - Bulk adding of CIDR IPS to whitelist is not working\n1964492 - Pick 102171: Implement support for watch initialization in P\u0026F\n1964625 - NETID duplicate check is only required in NetworkPolicy Mode\n1964748 - Sync upstream 1.7.2 downstream\n1964756 - PVC status is always in \u0027Bound\u0027 status when it is actually cloning\n1964847 - Sanity check test suite missing from the repo\n1964888 - opoenshift-apiserver imagestreamimports depend on \u003e34s timeout support, WAS: transport: loopyWriter.run returning. 
connection error: desc = \"transport is closing\"\n1964936 - error log for \"oc adm catalog mirror\" is not correct\n1964979 - Add mapping from ACI to infraenv to handle creation order issues\n1964997 - Helm Library charts are showing and can be installed from Catalog\n1965024 - [DR] backup and restore should perform consistency checks on etcd snapshots\n1965092 - [Assisted-4.7] [Staging][OLM] Operators deployments start before all workers finished installation\n1965283 - 4.7-\u003e4.8 upgrades: cluster operators are not ready: openshift-controller-manager (Upgradeable=Unknown NoData: ), service-ca (Upgradeable=Unknown NoData:\n1965330 - oc image extract fails due to security capabilities on files\n1965334 - opm index add fails during image extraction\n1965367 - Typo in in etcd-metric-serving-ca resource name\n1965370 - \"Route\" is not translated in Korean or Chinese\n1965391 - When storage class is already present wizard do not jumps to \"Stoarge and nodes\"\n1965422 - runc is missing Provides oci-runtime in rpm spec\n1965522 - [v2v] Multiple typos on VM Import screen\n1965545 - Pod stuck in ContainerCreating: Unit ...slice already exists\n1965909 - Replace \"Enable Taint Nodes\" by \"Mark nodes as dedicated\"\n1965921 - [oVirt] High performance VMs shouldn\u0027t be created with Existing policy\n1965929 - kube-apiserver should use cert auth when reaching out to the oauth-apiserver with a TokenReview request\n1966077 - `hidden` descriptor is visible in the Operator instance details page`\n1966116 - DNS SRV request which worked in 4.7.9 stopped working in 4.7.11\n1966126 - root_ca_cert_publisher_sync_duration_seconds metric can have an excessive cardinality\n1966138 - (release-4.8) Update K8s \u0026 OpenShift API versions\n1966156 - Issue with Internal Registry CA on the service pod\n1966174 - No storage class is installed, OCS and CNV installations fail\n1966268 - Workaround for Network Manager not supporting nmconnections priority\n1966401 - Revamp Ceph Table in 
Install Wizard flow\n1966410 - kube-controller-manager should not trigger APIRemovedInNextReleaseInUse alert\n1966416 - (release-4.8) Do not exceed the data size limit\n1966459 - \u0027policy/v1beta1 PodDisruptionBudget\u0027 and \u0027batch/v1beta1 CronJob\u0027 appear in image-registry-operator log\n1966487 - IP address in Pods list table are showing node IP other than pod IP\n1966520 - Add button from ocs add capacity should not be enabled if there are no PV\u0027s\n1966523 - (release-4.8) Gather MachineAutoScaler definitions\n1966546 - [master] KubeAPI - keep day1 after cluster is successfully installed\n1966561 - Workload partitioning annotation workaround needed for CSV annotation propagation bug\n1966602 - don\u0027t require manually setting IPv6DualStack feature gate in 4.8\n1966620 - The bundle.Dockerfile in the repo is obsolete\n1966632 - [4.8.0] [assisted operator] Unable to re-register an SNO instance if deleting CRDs during install\n1966654 - Alertmanager PDB is not created, but Prometheus UWM is\n1966672 - Add Sprint 201 translations\n1966675 - Admin console string updates\n1966677 - Change comma to semicolon\n1966683 - Translation bugs from Sprint 201 files\n1966684 - Verify \"Creating snapshot for claim \u003c1\u003e{pvcName}\u003c/1\u003e\" displays correctly\n1966697 - Garbage collector logs every interval - move to debug level\n1966717 - include full timestamps in the logs\n1966759 - Enable downstream plugin for Operator SDK\n1966795 - [tests] Release 4.7 broken due to the usage of wrong OCS version\n1966813 - \"Replacing an unhealthy etcd member whose node is not ready\" procedure results in new etcd pod in CrashLoopBackOff\n1966862 - vsphere IPI - local dns prepender is not prepending nameserver 127.0.0.1\n1966892 - [master] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting for bootkub[e\"\n1966952 - [4.8.0] [Assisted-4.8][SNO][Dual Stack] DHCPv6 settings \"ipv6.dhcp-duid=ll\" missing from dual stack 
install\n1967104 - [4.8.0] InfraEnv ctrl: log the amount of NMstate Configs baked into the image\n1967126 - [4.8.0] [DOC] KubeAPI docs should clarify that the InfraEnv Spec pullSecretRef is currently ignored\n1967197 - 404 errors loading some i18n namespaces\n1967207 - Getting started card: console customization resources link shows other resources\n1967208 - Getting started card should use semver library for parsing the version instead of string manipulation\n1967234 - Console is continuously polling for ConsoleLink acm-link\n1967275 - Awkward wrapping in getting started dashboard card\n1967276 - Help menu tooltip overlays dropdown\n1967398 - authentication operator still uses previous deleted pod ip rather than the new created pod ip to do health check\n1967403 - (release-4.8) Increase workloads fingerprint gatherer pods limit\n1967423 - [master] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967444 - openshift-local-storage pods found with invalid priority class, should be openshift-user-critical or begin with system- while running e2e tests\n1967531 - the ccoctl tool should extend MaxItems when listRoles, the default value 100 is a little small\n1967578 - [4.8.0] clusterDeployments controller should take 1m to reqeueue when failing with AddOpenshiftVersion\n1967591 - The ManagementCPUsOverride admission plugin should not mutate containers with the limit\n1967595 - Fixes the remaining lint issues\n1967614 - prometheus-k8s pods can\u0027t be scheduled due to volume node affinity conflict\n1967623 - [OCPonRHV] - ./openshift-install installation with install-config doesn\u0027t work if ovirt-config.yaml doesn\u0027t exist and user should fill the FQDN URL\n1967625 - Add OpenShift Dockerfile for cloud-provider-aws\n1967631 - [4.8.0] Cluster install failed due to timeout while \"Waiting for control plane\"\n1967633 - [4.8.0] [Assisted-4.8][SNO] SNO node cannot transition into \"Writing image to disk\" from \"Waiting 
for bootkube\"\n1967639 - Console whitescreens if user preferences fail to load\n1967662 - machine-api-operator should not use deprecated \"platform\" field in infrastructures.config.openshift.io\n1967667 - Add Sprint 202 Round 1 translations\n1967713 - Insights widget shows invalid link to the OCM\n1967717 - Insights Advisor widget is missing a description paragraph and contains deprecated naming\n1967745 - When setting DNS node placement by toleration to not tolerate master node, effect value should not allow string other than \"NoExecute\"\n1967803 - should update to 7.5.5 for grafana resources version label\n1967832 - Add more tests for periodic.go\n1967833 - Add tasks pool to tasks_processing\n1967842 - Production logs are spammed on \"OCS requirements validation status Insufficient hosts to deploy OCS. A minimum of 3 hosts is required to deploy OCS\"\n1967843 - Fix null reference to messagesToSearch in gather_logs.go\n1967902 - [4.8.0] Assisted installer chrony manifests missing index numberring\n1967933 - Network-Tools debug scripts not working as expected\n1967945 - [4.8.0] [assisted operator] Assisted Service Postgres crashes msg: \"mkdir: cannot create directory \u0027/var/lib/pgsql/data/userdata\u0027: Permission denied\"\n1968019 - drain timeout and pool degrading period is too short\n1968067 - [master] Agent validation not including reason for being insufficient\n1968168 - [4.8.0] KubeAPI - keep day1 after cluster is successfully installed\n1968175 - [4.8.0] Agent validation not including reason for being insufficient\n1968373 - [4.8.0] BMAC re-attaches installed node on ISO regeneration\n1968385 - [4.8.0] Infra env require pullSecretRef although it shouldn\u0027t be required\n1968435 - [4.8.0] Unclear message in case of missing clusterImageSet\n1968436 - Listeners timeout updated to remain using default value\n1968449 - [4.8.0] Wrong Install-config override documentation\n1968451 - [4.8.0] Garbage collector not cleaning up directories of removed 
clusters\n1968452 - [4.8.0] [doc] \"Mirror Registry Configuration\" doc section needs clarification of functionality and limitations\n1968454 - [4.8.0] backend events generated with wrong namespace for agent\n1968455 - [4.8.0] Assisted Service operator\u0027s controllers are starting before the base service is ready\n1968515 - oc should set user-agent when talking with registry\n1968531 - Sync upstream 1.8.0 downstream\n1968558 - [sig-cli] oc adm storage-admin [Suite:openshift/conformance/parallel] doesn\u0027t clean up properly\n1968567 - [OVN] Egress router pod not running and openshift.io/scc is restricted\n1968625 - Pods using sr-iov interfaces failign to start for Failed to create pod sandbox\n1968700 - catalog-operator crashes when status.initContainerStatuses[].state.waiting is nil\n1968701 - Bare metal IPI installation is failed due to worker inspection failure\n1968754 - CI: e2e-metal-ipi-upgrade failing on KubeletHasDiskPressure, which triggers machine-config RequiredPoolsFailed\n1969212 - [FJ OCP4.8 Bug - PUBLIC VERSION]: Masters repeat reboot every few minutes during workers provisioning\n1969284 - Console Query Browser: Can\u0027t reset zoom to fixed time range after dragging to zoom\n1969315 - [4.8.0] BMAC doesn\u0027t check if ISO Url changed before queuing BMH for reconcile\n1969352 - [4.8.0] Creating BareMetalHost without the \"inspect.metal3.io\" does not automatically add it\n1969363 - [4.8.0] Infra env should show the time that ISO was generated. 
\n1969367 - [4.8.0] BMAC should wait for an ISO to exist for 1 minute before using it\n1969386 - Filesystem\u0027s Utilization doesn\u0027t show in VM overview tab\n1969397 - OVN bug causing subports to stay DOWN fails installations\n1969470 - [4.8.0] Misleading error in case of install-config override bad input\n1969487 - [FJ OCP4.8 Bug]: Avoid always do delete_configuration clean step\n1969525 - Replace golint with revive\n1969535 - Topology edit icon does not link correctly when branch name contains slash\n1969538 - Install a VolumeSnapshotClass by default on CSI Drivers that support it\n1969551 - [4.8.0] Assisted service times out on GetNextSteps due to `oc adm release info` taking too long\n1969561 - Test \"an end user can use OLM can subscribe to the operator\" generates deprecation alert\n1969578 - installer: accesses v1beta1 RBAC APIs and causes APIRemovedInNextReleaseInUse to fire\n1969599 - images without registry are being prefixed with registry.hub.docker.com instead of docker.io\n1969601 - manifest for networks.config.openshift.io CRD uses deprecated apiextensions.k8s.io/v1beta1\n1969626 - Portfoward stream cleanup can cause kubelet to panic\n1969631 - EncryptionPruneControllerDegraded: etcdserver: request timed out\n1969681 - MCO: maxUnavailable of ds/machine-config-daemon does not get updated due to missing resourcemerge check\n1969712 - [4.8.0] Assisted service reports a malformed iso when we fail to download the base iso\n1969752 - [4.8.0] [assisted operator] Installed Clusters are missing DNS setups\n1969773 - [4.8.0] Empty cluster name on handleEnsureISOErrors log after applying InfraEnv.yaml\n1969784 - WebTerminal widget should send resize events\n1969832 - Applying a profile with multiple inheritance where parents include a common ancestor fails\n1969891 - Fix rotated pipelinerun status icon issue in safari\n1969900 - Test files should not use deprecated APIs that will trigger APIRemovedInNextReleaseInUse\n1969903 - Provisioning a large number 
of hosts results in an unexpected delay in hosts becoming available\n1969951 - Cluster local doesn\u0027t work for knative services created from dev console\n1969969 - ironic-rhcos-downloader container uses and old base image\n1970062 - ccoctl does not work with STS authentication\n1970068 - ovnkube-master logs \"Failed to find node ips for gateway\" error\n1970126 - [4.8.0] Disable \"metrics-events\" when deploying using the operator\n1970150 - master pool is still upgrading when machine config reports level / restarts on osimageurl change\n1970262 - [4.8.0] Remove Agent CRD Status fields not needed\n1970265 - [4.8.0] Add State and StateInfo to DebugInfo in ACI and Agent CRDs\n1970269 - [4.8.0] missing role in agent CRD\n1970271 - [4.8.0] Add ProgressInfo to Agent and AgentClusterInstalll CRDs\n1970381 - Monitoring dashboards: Custom time range inputs should retain their values\n1970395 - [4.8.0] SNO with AI/operator - kubeconfig secret is not created until the spoke is deployed\n1970401 - [4.8.0] AgentLabelSelector is required yet not supported\n1970415 - SR-IOV Docs needs documentation for disabling port security on a network\n1970470 - Add pipeline annotation to Secrets which are created for a private repo\n1970494 - [4.8.0] Missing value-filling of log line in assisted-service operator pod\n1970624 - 4.7-\u003e4.8 updates: AggregatedAPIDown for v1beta1.metrics.k8s.io\n1970828 - \"500 Internal Error\" for all openshift-monitoring routes\n1970975 - 4.7 -\u003e 4.8 upgrades on AWS take longer than expected\n1971068 - Removing invalid AWS instances from the CF templates\n1971080 - 4.7-\u003e4.8 CI: KubePodNotReady due to MCD\u0027s 5m sleep between drain attempts\n1971188 - Web Console does not show OpenShift Virtualization Menu with VirtualMachine CRDs of version v1alpha3 !\n1971293 - [4.8.0] Deleting agent from one namespace causes all agents with the same name to be deleted from all namespaces\n1971308 - [4.8.0] AI KubeAPI AgentClusterInstall confusing 
\"Validated\" condition about VIP not matching machine network\n1971529 - [Dummy bug for robot] 4.7.14 upgrade to 4.8 and then downgrade back to 4.7.14 doesn\u0027t work - clusteroperator/kube-apiserver is not upgradeable\n1971589 - [4.8.0] Telemetry-client won\u0027t report metrics in case the cluster was installed using the assisted operator\n1971630 - [4.8.0] ACM/ZTP with Wan emulation fails to start the agent service\n1971632 - [4.8.0] ACM/ZTP with Wan emulation, several clusters fail to step past discovery\n1971654 - [4.8.0] InfraEnv controller should always requeue for backend response HTTP StatusConflict (code 409)\n1971739 - Keep /boot RW when kdump is enabled\n1972085 - [4.8.0] Updating configmap within AgentServiceConfig is not logged properly\n1972128 - ironic-static-ip-manager container still uses 4.7 base image\n1972140 - [4.8.0] ACM/ZTP with Wan emulation, SNO cluster installs do not show as installed although they are\n1972167 - Several operators degraded because Failed to create pod sandbox when installing an sts cluster\n1972213 - Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1972262 - [4.8.0] \"baremetalhost.metal3.io/detached\" uses boolean value where string is expected\n1972426 - Adopt failure can trigger deprovisioning\n1972436 - [4.8.0] [DOCS] AgentServiceConfig examples in operator.md doc should each contain databaseStorage + filesystemStorage\n1972526 - [4.8.0] clusterDeployments controller should send an event to InfraEnv for backend cluster registration\n1972530 - [4.8.0] no indication for missing debugInfo in AgentClusterInstall\n1972565 - performance issues due to lost node, pods taking too long to relaunch\n1972662 - DPDK KNI modules need some additional tools\n1972676 - Requirements for authenticating kernel modules with X.509\n1972687 - Using bound SA tokens causes causes failures to /apis/authorization.openshift.io/v1/clusterrolebindings\n1972690 - [4.8.0] infra-env condition message isn\u0027t informative in case of 
missing pull secret\n1972702 - [4.8.0] Domain dummy.com (not belonging to Red Hat) is being used in a default configuration\n1972768 - kube-apiserver setup fail while installing SNO due to port being used\n1972864 - New `local-with-fallback` service annotation does not preserve source IP\n1973018 - Ironic rhcos downloader breaks image cache in upgrade process from 4.7 to 4.8\n1973117 - No storage class is installed, OCS and CNV installations fail\n1973233 - remove kubevirt images and references\n1973237 - RHCOS-shipped stalld systemd units do not use SCHED_FIFO to run stalld. \n1973428 - Placeholder bug for OCP 4.8.0 image release\n1973667 - [4.8] NetworkPolicy tests were mistakenly marked skipped\n1973672 - fix ovn-kubernetes NetworkPolicy 4.7-\u003e4.8 upgrade issue\n1973995 - [Feature:IPv6DualStack] tests are failing in dualstack\n1974414 - Uninstalling kube-descheduler clusterkubedescheduleroperator.4.6.0-202106010807.p0.git.5db84c5 removes some clusterrolebindings\n1974447 - Requirements for nvidia GPU driver container for driver toolkit\n1974677 - [4.8.0] KubeAPI CVO progress is not available on CR/conditions only in events. \n1974718 - Tuned net plugin fails to handle net devices with n/a value for a channel\n1974743 - [4.8.0] All resources not being cleaned up after clusterdeployment deletion\n1974746 - [4.8.0] File system usage not being logged appropriately\n1974757 - [4.8.0] Assisted-service deployed on an IPv6 cluster installed with proxy: agentclusterinstall shows error pulling an image from quay. 
\n1974773 - Using bound SA tokens causes fail to query cluster resource especially in a sts cluster\n1974839 - CVE-2021-29059 nodejs-is-svg: Regular expression denial of service if the application is provided and checks a crafted invalid SVG string\n1974850 - [4.8] coreos-installer failing Execshield\n1974931 - [4.8.0] Assisted Service Operator should be Infrastructure Operator for Red Hat OpenShift\n1974978 - 4.8.0.rc0 upgrade hung, stuck on DNS clusteroperator progressing\n1975155 - Kubernetes service IP cannot be accessed for rhel worker\n1975227 - [4.8.0] KubeAPI Move conditions consts to CRD types\n1975360 - [4.8.0] [master] timeout on kubeAPI subsystem test: SNO full install and validate MetaData\n1975404 - [4.8.0] Confusing behavior when multi-node spoke workers present when only controlPlaneAgents specified\n1975432 - Alert InstallPlanStepAppliedWithWarnings does not resolve\n1975527 - VMware UPI is configuring static IPs via ignition rather than afterburn\n1975672 - [4.8.0] Production logs are spammed on \"Found unpreparing host: id 08f22447-2cf1-a107-eedf-12c7421f7380 status insufficient\"\n1975789 - worker nodes rebooted when we simulate a case where the api-server is down\n1975938 - gcp-realtime: e2e test failing [sig-storage] Multi-AZ Cluster Volumes should only be allowed to provision PDs in zones where nodes exist [Suite:openshift/conformance/parallel] [Suite:k8s]\n1975964 - 4.7 nightly upgrade to 4.8 and then downgrade back to 4.7 nightly doesn\u0027t work - ingresscontroller \"default\" is degraded\n1976079 - [4.8.0] Openshift Installer| UEFI mode | BM hosts have BIOS halted\n1976263 - [sig-cli] oc adm must-gather runs successfully for audit logs [Suite:openshift/conformance/parallel]\n1976376 - disable jenkins client plugin test whose Jenkinsfile references master branch openshift/origin artifacts\n1976590 - [Tracker] [SNO][assisted-operator][nmstate] Bond Interface is down when booting from the discovery ISO\n1977233 - [4.8] Unable to 
authenticate against IDP after upgrade to 4.8-rc.1\n1977351 - CVO pod skipped by workload partitioning with incorrect error stating cluster is not SNO\n1977352 - [4.8.0] [SNO] No DNS to cluster API from assisted-installer-controller\n1977426 - Installation of OCP 4.6.13 fails when teaming interface is used with OVNKubernetes\n1977479 - CI failing on firing CertifiedOperatorsCatalogError due to slow livenessProbe responses\n1977540 - sriov webhook not worked when upgrade from 4.7 to 4.8\n1977607 - [4.8.0] Post making changes to AgentServiceConfig assisted-service operator is not detecting the change and redeploying assisted-service pod\n1977924 - Pod fails to run when a custom SCC with a specific set of volumes is used\n1980788 - NTO-shipped stalld can segfault\n1981633 - enhance service-ca injection\n1982250 - Performance Addon Operator fails to install after catalog source becomes ready\n1982252 - olm Operator is in CrashLoopBackOff state with error \"couldn\u0027t cleanup cross-namespace ownerreferences\"\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2016-2183\nhttps://access.redhat.com/security/cve/CVE-2020-7774\nhttps://access.redhat.com/security/cve/CVE-2020-15106\nhttps://access.redhat.com/security/cve/CVE-2020-15112\nhttps://access.redhat.com/security/cve/CVE-2020-15113\nhttps://access.redhat.com/security/cve/CVE-2020-15114\nhttps://access.redhat.com/security/cve/CVE-2020-15136\nhttps://access.redhat.com/security/cve/CVE-2020-26160\nhttps://access.redhat.com/security/cve/CVE-2020-26541\nhttps://access.redhat.com/security/cve/CVE-2020-28469\nhttps://access.redhat.com/security/cve/CVE-2020-28500\nhttps://access.redhat.com/security/cve/CVE-2020-28852\nhttps://access.redhat.com/security/cve/CVE-2021-3114\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3636\nhttps://access.redhat.com/security/cve/CVE-2021-20206\nhttps://access.redhat.com/security/cve/CVE-2021-20271\nhttps://access.redhat.com/security/cve/CVE-2021-20291\nhttps://access.redhat.com/security/cve/CVE-2021-21419\nhttps://access.redhat.com/security/cve/CVE-2021-21623\nhttps://access.redhat.com/security/cve/CVE-2021-21639\nhttps://access.redhat.com/security/cve/CVE-2021-21640\nhttps://access.redhat.com/security/cve/CVE-2021-21648\nhttps://access.redhat.com/security/cve/CVE-2021-22133\nhttps://access.redhat.com/security/cve/CVE-2021-23337\nhttps://access.redhat.com/security/cve/CVE-2021-23362\nhttps://access.redhat.com/security/cve/CVE-2021-23368\nhttps://access.redhat.com/security/cve/CVE-2021-23382\nhttps://access.redhat.com/security/cve/CVE-2021-25735\nhttps://access.redhat.com/security/cve/CVE-2021-25737\nhttps://access.r
edhat.com/security/cve/CVE-2021-26539\nhttps://access.redhat.com/security/cve/CVE-2021-26540\nhttps://access.redhat.com/security/cve/CVE-2021-27292\nhttps://access.redhat.com/security/cve/CVE-2021-28092\nhttps://access.redhat.com/security/cve/CVE-2021-29059\nhttps://access.redhat.com/security/cve/CVE-2021-29622\nhttps://access.redhat.com/security/cve/CVE-2021-32399\nhttps://access.redhat.com/security/cve/CVE-2021-33034\nhttps://access.redhat.com/security/cve/CVE-2021-33194\nhttps://access.redhat.com/security/cve/CVE-2021-33909\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYQCOF9zjgjWX9erEAQjsEg/+NSFQdRcZpqA34LWRtxn+01y2MO0WLroQ\nd4o+3h0ECKYNRFKJe6n7z8MdmPpvV2uNYN0oIwidTESKHkFTReQ6ZolcV/sh7A26\nZ7E+hhpTTObxAL7Xx8nvI7PNffw3CIOZSpnKws5TdrwuMkH5hnBSSZntP5obp9Vs\nImewWWl7CNQtFewtXbcmUojNzIvU1mujES2DTy2ffypLoOW6kYdJzyWubigIoR6h\ngep9HKf1X4oGPuDNF5trSdxKwi6W68+VsOA25qvcNZMFyeTFhZqowot/Jh1HUHD8\nTWVpDPA83uuExi/c8tE8u7VZgakWkRWcJUsIw68VJVOYGvpP6K/MjTpSuP2itgUX\nX//1RGQM7g6sYTCSwTOIrMAPbYH0IMbGDjcS4fSZcfg6c+WJnEpZ72ZgjHZV8mxb\n1BtQSs2lil48/cwDKM0yMO2nYsKiz4DCCx2W5izP0rLwNA8Hvqh9qlFgkxJWWOvA\nmtBCelB0E74qrE4NXbX+MIF7+ZQKjd1evE91/VWNs0FLR/xXdP3C5ORLU3Fag0G/\n0oTV73NdxP7IXVAdsECwU2AqS9ne1y01zJKtd7hq7H/wtkbasqCNq5J7HikJlLe6\ndpKh5ZRQzYhGeQvho9WQfz/jd4HZZTcB6wxrWubbd05bYt/i/0gau90LpuFEuSDx\n+bLvJlpGiMg=\n=NJcM\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 
nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - 
CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. VDSM manages and monitors the host\u0027s storage, memory and\nnetworks as well as virtual machine creation, other host administration\ntasks, statistics gathering, and log collection. \n\nBug Fix(es):\n\n* An update in libvirt has changed the way block threshold events are\nsubmitted. \nAs a result, the VDSM was confused by the libvirt event, and tried to look\nup a drive, logging a warning about a missing drive. \nIn this release, the VDSM has been adapted to handle the new libvirt\nbehavior, and does not log warnings about missing drives. (BZ#1948177)\n\n* Previously, when a virtual machine was powered off on the source host of\na live migration and the migration finished successfully at the same time,\nthe two events interfered with each other, and sometimes prevented\nmigration cleanup resulting in additional migrations from the host being\nblocked. \nIn this release, additional migrations are not blocked. (BZ#1959436)\n\n* Previously, when failing to execute a snapshot and re-executing it later,\nthe second try would fail due to using the previous execution data. In this\nrelease, this data will be used only when needed, in recovery mode. \n(BZ#1984209)\n\n4. Then engine deletes the volume and causes data corruption. \n1998017 - Keep cinbderlib dependencies optional for 4.4.8\n\n6. 
\n\nBug Fix(es):\n\n* Documentation is referencing deprecated API for Service Export -\nSubmariner (BZ#1936528)\n\n* Importing of cluster fails due to error/typo in generated command\n(BZ#1936642)\n\n* RHACM 2.2.2 images (BZ#1938215)\n\n* 2.2 clusterlifecycle fails to allow provision `fips: true` clusters on\naws, vsphere (BZ#1941778)\n\n3. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.4 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API", "sources": [ { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "VULHUB", "id": "VHN-373964" }, { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" } ], "trust": 2.43 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-28500", "trust": 4.1 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.8 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "PACKETSTORM", "id": "163276", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "162901", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-011490", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "163690", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.7 }, { "db": "PACKETSTORM", "id": 
"164090", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2021.1225", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.1871", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5790", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3036", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2232", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2182", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2555", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.2657", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4568", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2555", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022052615", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021090922", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021062702", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202102-1168", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-373964", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2020-28500", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168352", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-373964" }, { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "id": "VAR-202102-1492", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-373964" } ], "trust": 0.30766129 }, "last_update_date": "2023-12-18T11:50:50.527000Z", "patch": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "perf", "trust": 0.8, "url": "https://github.com/lodash/lodash/pull/5065" }, { "title": "lodash Security vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=142393" }, { "title": "Debian CVElist Bug Report Logs: CVE-2021-23337 CVE-2020-28500", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=705b23b69122ed473c796891371a9f52" }, { "title": "IBM: Security Bulletin: IBM Integration Bus \u0026 IBM App Connect Enterprise V11 are affected by vulnerabilities in Node.js (CVE-2020-28500)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=3d9a3b6c21f9e87c491e9c1a56004595" }, { "title": "IBM: Security Bulletin: A security vulnerability in Node.js Lodash module affects IBM Cloud Automation Manager.", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ab2b9d02254c2d45625dc8b682d0c4eb" }, { "title": "Red Hat: Important: Migration Toolkit for Containers (MTC) 1.7.4 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226429 - security advisory" }, { "title": "tsp-vulnerable-app-nodejs-express", "trust": 0.1, "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express " }, { "title": "sample-vulnerable-app-nodejs-express", "trust": 0.1, "url": "https://github.com/samoylenko/sample-vulnerable-app-nodejs-express " }, { "title": "lm-test", "trust": 0.1, "url": "https://github.com/mishakav/lm-test " } ], "sources": [ { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "problemtype_data": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-Other", "trust": 1.0 }, { "problemtype": "others (CWE-Other) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "NVD", "id": "CVE-2020-28500" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.6, "url": "https://snyk.io/vuln/snyk-java-orgfujionwebjars-1074896" }, { "trust": 2.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 1.8, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.8, "url": "https://security.netapp.com/advisory/ntap-20210312-0006/" }, { "trust": 1.8, "url": "https://github.com/lodash/lodash/blob/npm/trimend.js%23l8" }, { "trust": 1.8, "url": "https://github.com/lodash/lodash/pull/5065" }, { "trust": 1.8, "url": "https://snyk.io/vuln/snyk-java-orgwebjars-1074894" }, { "trust": 1.8, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbower-1074892" }, { "trust": 1.8, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsbowergithublodash-1074895" }, { "trust": 1.8, "url": "https://snyk.io/vuln/snyk-java-orgwebjarsnpm-1074893" }, { "trust": 1.8, "url": "https://snyk.io/vuln/snyk-js-lodash-1018905" }, { "trust": 1.8, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.8, "url": "https://www.oracle.com/security-alerts/cpujan2022.html" }, { "trust": 1.8, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.8, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": 
"http://jvn.jp/vu/jvnvu99475301/index.html" }, { "trust": 0.7, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-integration-bus-ibm-app-connect-enterprise-v11-are-affected-by-vulnerabilities-in-node-js-cve-2020-28500/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.7, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.7, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.7, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.7, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-watson-discovery-for-ibm-cloud-pak-for-data-affected-by-vulnerability-in-node-js-3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2657" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1225" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162901/red-hat-security-advisory-2021-2179-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-security-guardium-insights-is-affected-by-multiple-vulnerabilities-5/" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6486341" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163747/red-hat-security-advisory-2021-3016-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-automation-manager-2/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164090/red-hat-security-advisory-2021-3459-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.1871" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3036" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021090922" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/163276/red-hat-security-advisory-2021-2543-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2555" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022052615" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6524656" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6483681" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/162151/red-hat-security-advisory-2021-1168-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021062702" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2232" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/163690/red-hat-security-advisory-2021-2438-01.html" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-a-security-vulnerability-in-node-js-lodash-module-affects-ibm-cloud-pak-for-multicloud-management-managed-service/" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-potential-vulnerability-with-node-js-lodash-module-2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.2555" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2182" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5790" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/lodash-denial-of-service-via-tonumber-trim-36225" }, { "trust": 0.6, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-cloud-pak-for-integration-is-vulnerable-to-node-js-lodash-vulnerability-cve-2020-28500/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4568" }, { "trust": 0.4, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-23337" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3114" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { 
"trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.2, 
"url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/.html" }, { "trust": 0.1, "url": "https://github.com/the-scan-project/tsp-vulnerable-app-nodejs-express" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23336" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13949" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8285" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2020-8286" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/jaeger/jaeger_install/rhb" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26116" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-3842" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27619" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2543" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24977" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-3842" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13776" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23336" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13949" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27619" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24977" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/ht" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2179" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/technical_notes" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21419" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15112" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-25737" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20291" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26540" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21419" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26539" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15106" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29059" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-2183" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21623" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2438" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15112" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25735" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20206" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22133" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15113" }, { "trust": 
0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21640" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26160" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21640" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2437" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15136" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23382" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21639" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21648" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15106" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29622" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-rel" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20291" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15113" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15114" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22133" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-2183" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15114" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3636" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, 
{ "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3459" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:1168" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29529" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29529" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3347" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28374" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26708" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27365" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0466" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27152" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27152" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3347" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27365" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0466" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27364" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14040" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28374" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-26708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15586" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8559" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30629" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2526" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0691" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0686" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32206" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-32208" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16845" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0639" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6429" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-30631" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16845" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0512" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15586" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13435" } ], "sources": [ { "db": "VULHUB", "id": "VHN-373964" }, { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", 
"id": "168352" }, { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-373964" }, { "db": "VULMON", "id": "CVE-2020-28500" }, { "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "db": "PACKETSTORM", "id": "163276" }, { "db": "PACKETSTORM", "id": "162901" }, { "db": "PACKETSTORM", "id": "163690" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "164090" }, { "db": "PACKETSTORM", "id": "162151" }, { "db": "PACKETSTORM", "id": "168352" }, { "db": "NVD", "id": "CVE-2020-28500" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-02-15T00:00:00", "db": "VULHUB", "id": "VHN-373964" }, { "date": "2021-02-15T00:00:00", "db": "VULMON", "id": "CVE-2020-28500" }, { "date": "2021-04-05T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "date": "2021-06-24T17:54:53", "db": "PACKETSTORM", "id": "163276" }, { "date": "2021-06-01T15:17:45", "db": "PACKETSTORM", "id": "162901" }, { "date": "2021-07-28T14:53:49", "db": "PACKETSTORM", "id": "163690" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2021-09-09T13:33:33", "db": "PACKETSTORM", "id": "164090" }, { "date": "2021-04-13T15:38:30", "db": "PACKETSTORM", "id": "162151" }, { "date": "2022-09-13T15:42:14", "db": "PACKETSTORM", "id": "168352" }, { "date": "2021-02-15T11:15:12.397000", "db": "NVD", "id": "CVE-2020-28500" }, { "date": "2021-02-15T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-09-13T00:00:00", "db": "VULHUB", "id": "VHN-373964" }, 
{ "date": "2022-09-13T00:00:00", "db": "VULMON", "id": "CVE-2020-28500" }, { "date": "2022-09-20T05:44:00", "db": "JVNDB", "id": "JVNDB-2020-011490" }, { "date": "2022-09-13T21:18:50.543000", "db": "NVD", "id": "CVE-2020-28500" }, { "date": "2022-11-11T00:00:00", "db": "CNNVD", "id": "CNNVD-202102-1168" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "163690" }, { "db": "CNNVD", "id": "CNNVD-202102-1168" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Lodash\u00a0 Vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011490" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202102-1168" } ], "trust": 0.6 } }
var-202105-1325
Vulnerability from variot
In ISC DHCP 4.1-ESV-R1 -> 4.1-ESV-R16, ISC DHCP 4.4.0 -> 4.4.2 (Other branches of ISC DHCP (i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series) are beyond their End-of-Life (EOL) and no longer supported by ISC. From inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), The outcome of encountering the defect while reading a lease that will trigger it varies, according to: the component being affected (i.e., dhclient or dhcpd) whether the package was built as a 32-bit or 64-bit binary whether the compiler flag -fstack-protection-strong was used when compiling In dhclient, ISC has not successfully reproduced the error on a 64-bit system. However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process. In dhcpd, when run in DHCPv4 or DHCPv6 mode: if the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protection-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted. if the dhcpd server binary was built for a 64-bit architecture OR if the -fstack-protection-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted. There is a discrepancy between the code that handles encapsulated option information in leases transmitted "on the wire" and the code which reads and parses lease information after it has been written to disk storage. 
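The defect described above belongs to a familiar class: decoding a colon-separated hex string (such as a hardware address stored in a lease file) into a fixed-size stack buffer. A minimal, hypothetical C sketch of a hardened parser follows; the names hexval and parse_colon_hex are invented for illustration and this is not ISC's actual code. The bound check marked in the comment is the kind of check whose absence allows an overlong lease value to overrun the buffer.

```c
#include <assert.h>
#include <stddef.h>

/* Decode one hex digit, or return -1 for a non-hex character. */
static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Parse "aa:bb:cc" into out[]; return the byte count, or -1 on
 * malformed input or when the value would not fit in outlen bytes. */
static int parse_colon_hex(const char *s, unsigned char *out, size_t outlen)
{
    size_t n = 0;
    while (*s) {
        int hi = hexval(s[0]);
        int lo = hexval(s[1]);
        if (hi < 0 || lo < 0)
            return -1;
        if (n >= outlen)      /* the crucial bound: without this check, */
            return -1;        /* an overlong lease value overruns out[] */
        out[n++] = (unsigned char)((hi << 4) | lo);
        s += 2;
        if (*s == ':')
            s++;
        else if (*s != '\0')
            return -1;
    }
    return (int)n;
}
```

A caller holding a four-byte buffer gets 4 back for "de:ad:be:ef", and -1 rather than a stack overwrite for "de:ad:be:ef:01".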
The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: dhcp security update Advisory ID: RHSA-2021:2469-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:2469 Issue date: 2021-06-17 CVE Names: CVE-2021-25217 =====================================================================
- Summary:
An update for dhcp is now available for Red Hat Enterprise Linux 7.6 Advanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update Support, and Red Hat Enterprise Linux 7.6 Update Services for SAP Solutions.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Server AUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64 Red Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64 Red Hat Enterprise Linux Server TUS (v. 7.6) - x86_64
- Description:
The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network.
Security Fix(es):
- dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient (CVE-2021-25217)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient
- Package List:
Red Hat Enterprise Linux Server AUS (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server E4S (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
ppc64le: dhclient-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-common-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-libs-4.2.5-69.el7_6.1.ppc64le.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server TUS (v. 7.6):
Source: dhcp-4.2.5-69.el7_6.1.src.rpm
x86_64: dhclient-4.2.5-69.el7_6.1.x86_64.rpm dhcp-4.2.5-69.el7_6.1.x86_64.rpm dhcp-common-4.2.5-69.el7_6.1.x86_64.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-libs-4.2.5-69.el7_6.1.i686.rpm dhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional AUS (v. 7.6):
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional E4S (v. 7.6):
ppc64le: dhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm dhcp-devel-4.2.5-69.el7_6.1.ppc64le.rpm
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
Red Hat Enterprise Linux Server Optional TUS (v. 7.6):
x86_64: dhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm dhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm dhcp-devel-4.2.5-69.el7_6.1.i686.rpm dhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-25217 https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYMs0KtzjgjWX9erEAQis7xAAhh3MBohMBq6bZd6sPasNG4rPX+Xh5AWf D+6WNTQLV1u1IU4ZzGKVMtBNSfCd8m727z/L0d4wBof06ngUXHkdR4AEzn5uuWSz lHzlgbpmvqxeBnXrHOG1WE43JNXHSsj0u8eARsLxEU4/rxnbLVOj5dMJkdWmXN61 DocHHFVw6GmdZSCr6/tLjvG57fWtVLQF4SpEdhXz55iNZ1l6y09FDtoom/FuXIcG VnsUpsu/iWMFaUaVQH3sFVLksl39IrHFQxvskXR+FHAPzb8vVuKyNihJ5b3BUhfh jTUKPxLO+X0/K9+cNFVSuSTPr7eHpRRHdUbFIHcUB0s1ACOnmvHr6G8FaVAi9BQZ 6hzWcOFOZS7fF4TnXF3q0yDAKApRwlyF1PP21u1XdCb17Z4+E2LZF0nqnbb3hCxV JfnsadNc2Re/gc3u1bOGQb56ylc7LC74BeMDoJSeldqdPeT5JUc8XRRCyWHjVcjD Bj1kD90FbD3Z3jRAvASgKg4KU1xqEZidHyL/qHo9YTS0h9lqc2iWb0n3/4RU0E8k OuNPpWxkzt1uGQl3iJbQH4TOsIQtqoDFOaCaPMbol44fnm69Q52zRBBr6AHVhEcY iOpTa2PUFK3FLfhkfUCHcCRVXqXeewefcODTWs2Jwx6/sl7nsZpWMNlV8+rdUmXR BuvubM0bUt8= =mdD7 -----END PGP SIGNATURE-----
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 6 ELS) - i386, s390x, x86_64
- These packages include redhat-release-virtualization-host. RHVH features a Cockpit user interface for monitoring the host's resources and performing administrative tasks. Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
All OpenShift Container Platform 4.7 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-between-minor.html#understanding-upgrade-channels_updating-cluster-between-minor
- Solution:
For OpenShift Container Platform 4.7 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.7/updating/updating-cluster-cli.html
- ========================================================================= Ubuntu Security Notice USN-4969-2 May 27, 2021
isc-dhcp vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 16.04 ESM
- Ubuntu 14.04 ESM
Summary:
DHCP could be made to crash if it received specially crafted network traffic.
Software Description: - isc-dhcp: DHCP server and client
Details:
USN-4969-1 fixed a vulnerability in DHCP. This update provides the corresponding update for Ubuntu 14.04 ESM and 16.04 ESM.
Original advisory details:
Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly handled lease file parsing. A remote attacker could possibly use this issue to cause DHCP to crash, resulting in a denial of service.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 16.04 ESM: isc-dhcp-client 4.3.3-5ubuntu12.10+esm1 isc-dhcp-server 4.3.3-5ubuntu12.10+esm1
Ubuntu 14.04 ESM: isc-dhcp-client 4.2.4-7ubuntu12.13+esm1 isc-dhcp-server 4.2.4-7ubuntu12.13+esm1
In general, a standard system update will make all the necessary changes. 7.7) - ppc64, ppc64le, s390x, x86_64
-
8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
Red Hat Advanced Cluster Management for Kubernetes 2.3.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/
Security:
- fastify-reply-from: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21321)
- fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service (CVE-2021-21322)
- nodejs-netmask: improper input validation of octal input data (CVE-2021-28918)
- redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)
- redis: Integer overflow via COPY command for large intsets (CVE-2021-29478)
- nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)
- nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions (CVE-2020-28500)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension (CVE-2020-28851)
- golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag (CVE-2020-28852)
- nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)
- oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)
- redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms (CVE-2021-21309)
- nodejs-lodash: command injection via template (CVE-2021-23337)
- nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl() (CVE-2021-23362)
- browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS) (CVE-2021-23364)
- nodejs-postcss: Regular expression denial of service during source map parsing (CVE-2021-23368)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option (CVE-2021-23369)
- nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js (CVE-2021-23382)
- nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option (CVE-2021-23383)
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
- nodejs-ua-parser-js: ReDoS via malicious User-Agent header (CVE-2021-27292)
- grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call (CVE-2021-27358)
- nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)
- nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character (CVE-2021-29418)
- ulikunitz/xz: Infinite loop in readUvarint allows for denial of service (CVE-2021-29482)
- normalize-url: ReDoS for data URLs (CVE-2021-33502)
- nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)
- nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe (CVE-2021-23343)
- html-parse-stringify: Regular Expression DoS (CVE-2021-23346)
- openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)
For more details about the security issues, including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE pages listed in the References section.
Bugs:
- RFE Make the source code for the endpoint-metrics-operator public (BZ# 1913444)
- cluster became offline after apiserver health check (BZ# 1942589)
- Bugs fixed (https://bugzilla.redhat.com/):
1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension
1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag
1913444 - RFE Make the source code for the endpoint-metrics-operator public
1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull
1927520 - RHACM 2.3.0 images
1928937 - CVE-2021-23337 nodejs-lodash: command injection via template
1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions
1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()
1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms
1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization
1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string
1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application
1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header
1941024 - CVE-2021-27358 grafana: snapshot feature allows an unauthenticated remote attacker to trigger a DoS via a remote API call
1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS
1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix escape of the proxied backend service
1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix escape of the proxied backend service
1942589 - cluster became offline after apiserver health check
1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()
1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that has octal integer with invalid character
1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data
1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option
1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing
1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js
1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service
1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)
1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command
1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets
1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions
1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id
1983131 - Defragmenting an etcd member doesn't reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters
-
Gentoo Linux Security Advisory GLSA 202305-22
https://security.gentoo.org/
Severity: Normal Title: ISC DHCP: Multiple Vulnerabilities Date: May 03, 2023 Bugs: #875521, #792324 ID: 202305-22
Synopsis
Multiple vulnerabilities have been discovered in ISC DHCP, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/dhcp < 4.4.3_p1 >= 4.4.3_p1
Description
Multiple vulnerabilities have been discovered in ISC DHCP. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC DHCP users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=net-misc/dhcp-4.4.3_p1"
References
[ 1 ] CVE-2021-25217 https://nvd.nist.gov/vuln/detail/CVE-2021-25217 [ 2 ] CVE-2022-2928 https://nvd.nist.gov/vuln/detail/CVE-2022-2928 [ 3 ] CVE-2022-2929 https://nvd.nist.gov/vuln/detail/CVE-2022-2929
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202305-22
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2023 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202105-1325", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "dhcp", "scope": "eq", "trust": 1.0, "vendor": "isc", "version": "4.1-esv" }, { "model": "ruggedcom rox rx1500", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1511", "scope": "gte", "trust": 1.0, 
"vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1400", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1536", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx5000", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "ruggedcom rox rx1512", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx5000", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "dhcp", "scope": "lte", "trust": 1.0, "vendor": "isc", "version": "4.4.2" }, { "model": "ruggedcom rox rx1524", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1501", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1501", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox rx1510", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ruggedcom rox mx5000", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox rx1512", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "ruggedcom rox mx5000", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "ontap select deploy administration utility", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire \\\u0026 hci management node", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "ruggedcom rox rx1510", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "sinec ins", "scope": 
"eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "ruggedcom rox rx1500", "scope": "gte", "trust": 1.0, "vendor": "siemens", "version": "2.3.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "ruggedcom rox rx1511", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "2.15.0" }, { "model": "dhcp", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "4.4.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11_rc2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10_rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:isc:dhcp:4.1-esv:r12_p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r10rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11rc1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r11rc2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12-p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r12b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r13b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r14b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r16:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "4.4.2", "versionStartIncluding": "4.4.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15-p1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:dhcp:4.1-esv:r15_b1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { 
"cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1400_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1500_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1501_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1501:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1510_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { 
"children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1510:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1511_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1511:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1512_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1512:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1524_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1524:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx1536_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx1536:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { 
"children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_rx5000_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_rx5000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:siemens:ruggedcom_rox_mx5000_firmware:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "2.15.0", "versionStartIncluding": "2.3.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:siemens:ruggedcom_rox_mx5000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:ontap_select_deploy_administration_utility:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": 
"PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" } ], "trust": 0.9 }, "cve": "CVE-2021-25217", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "ADJACENT_NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 3.3, "confidentialityImpact": "NONE", "exploitabilityScore": 6.5, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "LOW", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:A/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "ADJACENT_NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "PARTIAL", "baseScore": 3.3, "confidentialityImpact": "NONE", "exploitabilityScore": 6.5, "id": "CVE-2021-25217", "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "LOW", "trust": 0.1, "userInteractionRequired": null, "vectorString": 
"AV:A/AC:L/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "ADJACENT_NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.4, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 2.8, "impactScore": 4.0, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:C/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-25217", "trust": 1.0, "value": "HIGH" }, { "author": "security-officer@isc.org", "id": "CVE-2021-25217", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2021-25217", "trust": 0.1, "value": "LOW" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "NVD", "id": "CVE-2021-25217" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "In ISC DHCP 4.1-ESV-R1 -\u003e 4.1-ESV-R16, ISC DHCP 4.4.0 -\u003e 4.4.2 (Other branches of ISC DHCP (i.e., releases in the 4.0.x series or lower and releases in the 4.3.x series) are beyond their End-of-Life (EOL) and no longer supported by ISC. From inspection it is clear that the defect is also present in releases from those series, but they have not been officially tested for the vulnerability), The outcome of encountering the defect while reading a lease that will trigger it varies, according to: the component being affected (i.e., dhclient or dhcpd) whether the package was built as a 32-bit or 64-bit binary whether the compiler flag -fstack-protection-strong was used when compiling In dhclient, ISC has not successfully reproduced the error on a 64-bit system. 
However, on a 32-bit system it is possible to cause dhclient to crash when reading an improper lease, which could cause network connectivity problems for an affected system due to the absence of a running DHCP client process. In dhcpd, when run in DHCPv4 or DHCPv6 mode: if the dhcpd server binary was built for a 32-bit architecture AND the -fstack-protection-strong flag was specified to the compiler, dhcpd may exit while parsing a lease file containing an objectionable lease, resulting in lack of service to clients. Additionally, the offending lease and the lease immediately following it in the lease database may be improperly deleted. if the dhcpd server binary was built for a 64-bit architecture OR if the -fstack-protection-strong compiler flag was NOT specified, the crash will not occur, but it is possible for the offending lease and the lease which immediately followed it to be improperly deleted. There is a discrepancy between the code that handles encapsulated option information in leases transmitted \"on the wire\" and the code which reads and parses lease information after it has been written to disk storage. The highest threat from this vulnerability is to data confidentiality and integrity as well as service availability. (CVE-2021-25217). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: dhcp security update\nAdvisory ID: RHSA-2021:2469-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:2469\nIssue date: 2021-06-17\nCVE Names: CVE-2021-25217 \n=====================================================================\n\n1. Summary:\n\nAn update for dhcp is now available for Red Hat Enterprise Linux 7.6\nAdvanced Update Support, Red Hat Enterprise Linux 7.6 Telco Extended Update\nSupport, and Red Hat Enterprise Linux 7.6 Update Services for SAP\nSolutions. 
\n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Server AUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6) - ppc64le, x86_64\nRed Hat Enterprise Linux Server Optional TUS (v. 7.6) - x86_64\nRed Hat Enterprise Linux Server TUS (v. 7.6) - x86_64\n\n3. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. \n\nSecurity Fix(es):\n\n* dhcp: stack-based buffer overflow when parsing statements with\ncolon-separated hex digits in config or lease files in dhcpd and dhclient\n(CVE-2021-25217)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963258 - CVE-2021-25217 dhcp: stack-based buffer overflow when parsing statements with colon-separated hex digits in config or lease files in dhcpd and dhclient\n\n6. Package List:\n\nRed Hat Enterprise Linux Server AUS (v. 
7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server E4S (v. 7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nppc64le:\ndhclient-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-common-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-libs-4.2.5-69.el7_6.1.ppc64le.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server TUS (v. 7.6):\n\nSource:\ndhcp-4.2.5-69.el7_6.1.src.rpm\n\nx86_64:\ndhclient-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-common-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-libs-4.2.5-69.el7_6.1.i686.rpm\ndhcp-libs-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional AUS (v. 7.6):\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional E4S (v. 7.6):\n\nppc64le:\ndhcp-debuginfo-4.2.5-69.el7_6.1.ppc64le.rpm\ndhcp-devel-4.2.5-69.el7_6.1.ppc64le.rpm\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional TUS (v. 
7.6):\n\nx86_64:\ndhcp-debuginfo-4.2.5-69.el7_6.1.i686.rpm\ndhcp-debuginfo-4.2.5-69.el7_6.1.x86_64.rpm\ndhcp-devel-4.2.5-69.el7_6.1.i686.rpm\ndhcp-devel-4.2.5-69.el7_6.1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25217\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYMs0KtzjgjWX9erEAQis7xAAhh3MBohMBq6bZd6sPasNG4rPX+Xh5AWf\nD+6WNTQLV1u1IU4ZzGKVMtBNSfCd8m727z/L0d4wBof06ngUXHkdR4AEzn5uuWSz\nlHzlgbpmvqxeBnXrHOG1WE43JNXHSsj0u8eARsLxEU4/rxnbLVOj5dMJkdWmXN61\nDocHHFVw6GmdZSCr6/tLjvG57fWtVLQF4SpEdhXz55iNZ1l6y09FDtoom/FuXIcG\nVnsUpsu/iWMFaUaVQH3sFVLksl39IrHFQxvskXR+FHAPzb8vVuKyNihJ5b3BUhfh\njTUKPxLO+X0/K9+cNFVSuSTPr7eHpRRHdUbFIHcUB0s1ACOnmvHr6G8FaVAi9BQZ\n6hzWcOFOZS7fF4TnXF3q0yDAKApRwlyF1PP21u1XdCb17Z4+E2LZF0nqnbb3hCxV\nJfnsadNc2Re/gc3u1bOGQb56ylc7LC74BeMDoJSeldqdPeT5JUc8XRRCyWHjVcjD\nBj1kD90FbD3Z3jRAvASgKg4KU1xqEZidHyL/qHo9YTS0h9lqc2iWb0n3/4RU0E8k\nOuNPpWxkzt1uGQl3iJbQH4TOsIQtqoDFOaCaPMbol44fnm69Q52zRBBr6AHVhEcY\niOpTa2PUFK3FLfhkfUCHcCRVXqXeewefcODTWs2Jwx6/sl7nsZpWMNlV8+rdUmXR\nBuvubM0bUt8=\n=mdD7\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 6 ELS) - i386, s390x, x86_64\n\n3. \nThese packages include redhat-release-virtualization-host. \nRHVH features a Cockpit user interface for monitoring the host\u0027s resources\nand\nperforming administrative tasks. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nAll OpenShift Container Platform 4.7 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -between-minor.html#understanding-upgrade-channels_updating-cluster-between\n- -minor\n\n4. Solution:\n\nFor OpenShift Container Platform 4.7 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel\nease-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.7/updating/updating-cluster\n- -cli.html\n\n5. =========================================================================\nUbuntu Security Notice USN-4969-2\nMay 27, 2021\n\nisc-dhcp vulnerability\n=========================================================================\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 16.04 ESM\n- Ubuntu 14.04 ESM\n\nSummary:\n\nDHCP could be made to crash if it received specially crafted network\ntraffic. \n\nSoftware Description:\n- isc-dhcp: DHCP server and client\n\nDetails:\n\nUSN-4969-1 fixed a vulnerability in DHCP. This update provides\nthe corresponding update for Ubuntu 14.04 ESM and 16.04 ESM. \n\n\nOriginal advisory details:\n\n Jon Franklin and Pawel Wieczorkiewicz discovered that DHCP incorrectly\n handled lease file parsing. 
A remote attacker could possibly use this issue\n to cause DHCP to crash, resulting in a denial of service. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 16.04 ESM:\n isc-dhcp-client 4.3.3-5ubuntu12.10+esm1\n isc-dhcp-server 4.3.3-5ubuntu12.10+esm1\n\nUbuntu 14.04 ESM:\n isc-dhcp-client 4.2.4-7ubuntu12.13+esm1\n isc-dhcp-server 4.2.4-7ubuntu12.13+esm1\n\nIn general, a standard system update will make all the necessary changes. 7.7) - ppc64, ppc64le, s390x, x86_64\n\n3. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.3.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana\ngement_for_kubernetes/2.3/html/release_notes/\n\nSecurity:\n\n* fastify-reply-from: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21321)\n\n* fastify-http-proxy: crafted URL allows prefix scape of the proxied\nbackend service (CVE-2021-21322)\n\n* nodejs-netmask: improper input validation of octal input data\n(CVE-2021-28918)\n\n* redis: Integer overflow via STRALGO LCS command (CVE-2021-29477)\n\n* redis: Integer overflow via COPY command for large intsets\n(CVE-2021-29478)\n\n* nodejs-glob-parent: Regular expression denial of service (CVE-2020-28469)\n\n* nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n(CVE-2020-28500)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing\n- -u- extension (CVE-2020-28851)\n\n* golang.org/x/text: Panic in language.ParseAcceptLanguage while processing\nbcp47 tag (CVE-2020-28852)\n\n* nodejs-ansi_up: XSS due to insufficient URL sanitization (CVE-2021-3377)\n\n* oras: zip-slip vulnerability via oras-pull (CVE-2021-21272)\n\n* redis: integer overflow when configurable limit for maximum supported\nbulk input size is too big on 32-bit platforms (CVE-2021-21309)\n\n* nodejs-lodash: command injection via template (CVE-2021-23337)\n\n* nodejs-hosted-git-info: Regular Expression denial of service via\nshortcutMatch in fromUrl() (CVE-2021-23362)\n\n* browserslist: parsing of invalid queries could result in Regular\nExpression Denial of Service (ReDoS) (CVE-2021-23364)\n\n* nodejs-postcss: Regular expression denial of service during source map\nparsing (CVE-2021-23368)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with strict:true option (CVE-2021-23369)\n\n* nodejs-postcss: ReDoS via getAnnotationURL() and 
loadAnnotation() in\nlib/previous-map.js (CVE-2021-23382)\n\n* nodejs-handlebars: Remote code execution when compiling untrusted compile\ntemplates with compat:true option (CVE-2021-23383)\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\n* nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n(CVE-2021-27292)\n\n* grafana: snapshot feature allow an unauthenticated remote attacker to\ntrigger a DoS via a remote API call (CVE-2021-27358)\n\n* nodejs-is-svg: ReDoS via malicious string (CVE-2021-28092)\n\n* nodejs-netmask: incorrectly parses an IP address that has octal integer\nwith invalid character (CVE-2021-29418)\n\n* ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n(CVE-2021-29482)\n\n* normalize-url: ReDoS for data URLs (CVE-2021-33502)\n\n* nodejs-trim-newlines: ReDoS in .end() method (CVE-2021-33623)\n\n* nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n(CVE-2021-23343)\n\n* html-parse-stringify: Regular Expression DoS (CVE-2021-23346)\n\n* openssl: incorrect SSLv2 rollback protection (CVE-2021-23839)\n\nFor more details about the security issues, including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npages listed in the References section. \n\nBugs:\n\n* RFE Make the source code for the endpoint-metrics-operator public (BZ#\n1913444)\n\n* cluster became offline after apiserver health check (BZ# 1942589)\n\n3. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1913333 - CVE-2020-28851 golang.org/x/text: Panic in language.ParseAcceptLanguage while parsing -u- extension\n1913338 - CVE-2020-28852 golang.org/x/text: Panic in language.ParseAcceptLanguage while processing bcp47 tag\n1913444 - RFE Make the source code for the endpoint-metrics-operator public\n1921286 - CVE-2021-21272 oras: zip-slip vulnerability via oras-pull\n1927520 - RHACM 2.3.0 images\n1928937 - CVE-2021-23337 nodejs-lodash: command injection via template\n1928954 - CVE-2020-28500 nodejs-lodash: ReDoS via the toNumber, trim and trimEnd functions\n1930294 - CVE-2021-23839 openssl: incorrect SSLv2 rollback protection\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n1932634 - CVE-2021-21309 redis: integer overflow when configurable limit for maximum supported bulk input size is too big on 32-bit platforms\n1936427 - CVE-2021-3377 nodejs-ansi_up: XSS due to insufficient URL sanitization\n1939103 - CVE-2021-28092 nodejs-is-svg: ReDoS via malicious string\n1940196 - View Resource YAML option shows 404 error when reviewing a Subscription for an application\n1940613 - CVE-2021-27292 nodejs-ua-parser-js: ReDoS via malicious User-Agent header\n1941024 - CVE-2021-27358 grafana: snapshot feature allow an unauthenticated remote attacker to trigger a DoS via a remote API call\n1941675 - CVE-2021-23346 html-parse-stringify: Regular Expression DoS\n1942178 - CVE-2021-21321 fastify-reply-from: crafted URL allows prefix scape of the proxied backend service\n1942182 - CVE-2021-21322 fastify-http-proxy: crafted URL allows prefix scape of the proxied backend service\n1942589 - cluster became offline after apiserver health check\n1943208 - CVE-2021-23362 nodejs-hosted-git-info: Regular Expression denial of service via shortcutMatch in fromUrl()\n1944822 - CVE-2021-29418 nodejs-netmask: incorrectly parses an IP address that 
has octal integer with invalid character\n1944827 - CVE-2021-28918 nodejs-netmask: improper input validation of octal input data\n1945459 - CVE-2020-28469 nodejs-glob-parent: Regular expression denial of service\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1948763 - CVE-2021-23368 nodejs-postcss: Regular expression denial of service during source map parsing\n1954150 - CVE-2021-23382 nodejs-postcss: ReDoS via getAnnotationURL() and loadAnnotation() in lib/previous-map.js\n1954368 - CVE-2021-29482 ulikunitz/xz: Infinite loop in readUvarint allows for denial of service\n1955619 - CVE-2021-23364 browserslist: parsing of invalid queries could result in Regular Expression Denial of Service (ReDoS)\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe\n1957410 - CVE-2021-29477 redis: Integer overflow via STRALGO LCS command\n1957414 - CVE-2021-29478 redis: Integer overflow via COPY command for large intsets\n1964461 - CVE-2021-33502 normalize-url: ReDoS for data URLs\n1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method\n1968122 - clusterdeployment fails because hiveadmission sc does not have correct permissions\n1972703 - Subctl fails to join cluster, since it cannot auto-generate a valid cluster id\n1983131 - Defragmenting an etcd member doesn\u0027t reduce the DB size (7.5GB) on a setup with ~1000 spoke clusters\n\n5. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202305-22\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: ISC DHCP: Multiple Vulnerabilities\n Date: May 03, 2023\n Bugs: #875521, #792324\n ID: 202305-22\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC DHCP, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/dhcp \u003c 4.4.3_p1 \u003e= 4.4.3_p1\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC DHCP. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll ISC DHCP users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/dhcp-4.4.3_p1\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25217\n https://nvd.nist.gov/vuln/detail/CVE-2021-25217\n[ 2 ] CVE-2022-2928\n https://nvd.nist.gov/vuln/detail/CVE-2022-2928\n[ 3 ] CVE-2022-2929\n https://nvd.nist.gov/vuln/detail/CVE-2022-2929\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202305-22\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2023 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2021-25217" }, { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" } ], "trust": 1.98 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-25217", "trust": 2.2 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-406691", "trust": 1.1 }, { "db": "OPENWALL", "id": "OSS-SECURITY/2021/05/26/6", "trust": 1.1 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-25217", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163196", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163240", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163400", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162841", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163137", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163140", "trust": 
0.1 }, { "db": "PACKETSTORM", "id": "163052", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "163747", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172130", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "id": "VAR-202105-1325", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.366531175 }, "last_update_date": "2024-07-23T20:55:14.082000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Debian CVElist Bug Report Logs: isc-dhcp: CVE-2021-25217: A buffer overrun in lease file parsing code can be used to exploit a common vulnerability shared by dhcpd and dhclient", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_cvelist_bugreportlogs\u0026qid=b55bb445f71f0d88702845d3582e2b5c" }, { "title": "Amazon Linux AMI: ALAS-2021-1510", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux_ami\u0026qid=alas-2021-1510" }, { "title": "Amazon Linux 2: ALAS2-2021-1654", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2021-1654" }, { "title": "Red Hat: CVE-2021-25217", "trust": 0.1, "url": 
"https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-25217" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-25217 log" }, { "title": "Palo Alto Networks Security Advisory: PAN-SA-2024-0001 Informational Bulletin: Impact of OSS CVEs in PAN-OS", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=palo_alto_networks_security_advisory\u0026qid=34f98e4f4344c97599fe2d33618956a7" }, { "title": "Completion for lacework", "trust": 0.1, "url": "https://github.com/fbreton/lacework " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-119", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25217" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.2, "url": "https://security.gentoo.org/glsa/202305-22" }, { "trust": 1.1, "url": "https://kb.isc.org/docs/cve-2021-25217" }, { "trust": 1.1, "url": "http://www.openwall.com/lists/oss-security/2021/05/26/6" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2021/06/msg00002.html" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-406691.pdf" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220325-0011/" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/z2lb42jwiv4m4wdnxx5vgip26feywkif/" }, { "trust": 1.1, "url": 
"https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5qi4dyc7j4bghew3nh4xhmwthyc36uk4/" }, { "trust": 1.0, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25217" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-25217" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.6, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-27219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3560" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/119.html" }, { "trust": 0.1, "url": "https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=989157" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/alas-2021-1510.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2419" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24489" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/2974891" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24489" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27219" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2519" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3560" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2554" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:2555" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/updating/updating-cluster" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/release_notes/ocp-4-7-rel" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-4969-1" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-4969-2" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2405" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2418" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2415" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:2359" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28500" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20934" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8286" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29418" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-15358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33034" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28092" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33909" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23337" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32399" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27358" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21321" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23368" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8285" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-11668" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23364" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21309" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28196" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20934" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3016" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3377" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20271" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29362" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28500" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21272" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29477" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23346" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29478" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-11668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23839" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21322" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-2708" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23382" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33910" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2929" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2928" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://security.gentoo.org/" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-25217" }, { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "162841" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" }, { "db": "PACKETSTORM", "id": "163747" }, { "db": "PACKETSTORM", "id": "172130" }, { "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-05-26T00:00:00", "db": "VULMON", "id": "CVE-2021-25217" }, { "date": "2021-06-17T18:09:00", "db": "PACKETSTORM", "id": "163196" }, { "date": "2021-06-15T15:01:13", "db": "PACKETSTORM", "id": "163151" }, { "date": "2021-06-22T19:32:24", "db": "PACKETSTORM", "id": "163240" }, { "date": "2021-07-06T15:19:09", "db": "PACKETSTORM", "id": "163400" }, { "date": "2021-05-27T13:30:42", "db": "PACKETSTORM", "id": "162841" }, { "date": "2021-06-14T15:49:07", "db": "PACKETSTORM", 
"id": "163129" }, { "date": "2021-06-15T14:41:42", "db": "PACKETSTORM", "id": "163137" }, { "date": "2021-06-15T14:44:42", "db": "PACKETSTORM", "id": "163140" }, { "date": "2021-06-09T13:43:47", "db": "PACKETSTORM", "id": "163052" }, { "date": "2021-08-06T14:02:37", "db": "PACKETSTORM", "id": "163747" }, { "date": "2023-05-03T15:37:18", "db": "PACKETSTORM", "id": "172130" }, { "date": "2021-05-26T22:15:07.947000", "db": "NVD", "id": "CVE-2021-25217" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-25217" }, { "date": "2023-11-07T03:31:24.893000", "db": "NVD", "id": "CVE-2021-25217" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "162841" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat Security Advisory 2021-2469-01", "sources": [ { "db": "PACKETSTORM", "id": "163196" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "overflow", "sources": [ { "db": "PACKETSTORM", "id": "163196" }, { "db": "PACKETSTORM", "id": "163151" }, { "db": "PACKETSTORM", "id": "163240" }, { "db": "PACKETSTORM", "id": "163400" }, { "db": "PACKETSTORM", "id": "163129" }, { "db": "PACKETSTORM", "id": "163137" }, { "db": "PACKETSTORM", "id": "163140" }, { "db": "PACKETSTORM", "id": "163052" } ], "trust": 0.8 } }
var-202108-1941
Vulnerability from variot
axios is vulnerable to Inefficient Regular Expression Complexity. axios contains a resource exhaustion vulnerability that may result in a denial-of-service (DoS) condition. Pillow is a Python-based image processing library. There is currently no information about this vulnerability; please follow CNNVD or vendor announcements. Relevant releases/architectures:
2.0 - ppc64le, s390x, x86_64
- Solution:
The OpenShift Service Mesh release notes provide information on the features and known issues:
https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html
-
Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
-
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: OpenShift Container Platform 4.10.3 security update Advisory ID: RHSA-2022:0056-01 Product: Red Hat OpenShift Enterprise Advisory URL: https://access.redhat.com/errata/RHSA-2022:0056 Issue date: 2022-03-10 CVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 CVE-2022-24407 =====================================================================
- Summary:
Red Hat OpenShift Container Platform release 4.10.3 is now available with updates to packages and images that fix several bugs and add enhancements.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
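The CVSS base score referenced above follows a published formula. As a minimal sketch (Python, scope-unchanged case only; the metric weights are those defined in the CVSS v3.1 specification, and the example vector is the one NVD publishes for CVE-2021-3749):

```python
import math

# CVSS v3.1 base score, scope-unchanged (S:U) case only. Metric weights
# are from the CVSS v3.1 specification, Table 8.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # scope-unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    # Rounding helper defined in CVSS v3.1 Appendix A: round up to one
    # decimal place, with a fixed-point step to avoid float artifacts.
    i = round(x * 100000)
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    return roundup(min(impact + exploitability, 10))

# CVE-2021-3749 vector AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
score = base_score("N", "L", "N", "N", "N", "N", "H")  # 7.5
```

Scores for scope-changed vectors use different PR weights and a different impact sub-formula, which this sketch deliberately omits.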
- Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.10.3. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHSA-2022:0055
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Security Fix(es):
- gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation (CVE-2021-3121)
- grafana: Snapshot authentication bypass (CVE-2021-39226)
- golang: net/http: limit growth of header canonicalization cache (CVE-2021-44716)
- nodejs-axios: Regular expression denial of service in trim function (CVE-2021-3749)
- golang: syscall: don't close fd 0 on ForkExec error (CVE-2021-44717)
- grafana: Forward OAuth Identity Token can allow users to access some data sources (CVE-2022-21673)
- grafana: directory traversal vulnerability (CVE-2021-43813)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
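The axios entry above (CVE-2021-3749) is an inefficient-regular-expression bug in a trim helper. A minimal sketch of the failure mode, in Python rather than axios's actual JavaScript (the pattern and payload here are illustrative, not the library's code), showing why trailing-whitespace trimming with a backtracking regex is quadratic on adversarial input:

```python
import re
import timeit

# Regex-based trim: r"\s+$" is retried at every interior whitespace
# position and backtracks on each failure, so a string that is mostly
# spaces but does NOT end in whitespace costs O(n^2) to process.
def regex_trim(s: str) -> str:
    return re.sub(r"\s+$", "", s)

# Linear-time alternative: rstrip scans once from the end of the string.
def safe_trim(s: str) -> str:
    return s.rstrip()

# Adversarial payload: a long run of spaces followed by one non-space,
# so the trailing-whitespace pattern never matches but is retried often.
payload = " " * 10_000 + "x"

# Both produce the same (unchanged) result on this payload...
assert regex_trim(payload) == safe_trim(payload) == payload

# ...but the regex version's cost grows quadratically with the length
# of the space run, while rstrip stays linear.
t_regex = timeit.timeit(lambda: regex_trim(payload), number=1)
t_safe = timeit.timeit(lambda: safe_trim(payload), number=1)
```

This is why such fixes typically replace the regex with a built-in trim rather than tweaking the pattern.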
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-x86_64
The image digest is sha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-s390x
The image digest is sha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le
The image digest is sha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c
All OpenShift Container Platform 4.10 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.10 see the following documentation, which will be updated shortly for this release, for instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
1808240 - Always return metrics value for pods under the user's namespace
1815189 - feature flagged UI does not always become available after operator installation
1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters
1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly
1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal
1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered
1878925 - 'oc adm upgrade --to ...' rejects versions which occur only in history, while the cluster-version operator supports history fallback
1880738 - origin e2e test deletes original worker
1882983 - oVirt csi driver should refuse to provision RWX and ROX PV
1886450 - Keepalived router id check not documented for RHV/VMware IPI
1889488 - The metrics endpoint for the Scheduler is not protected by RBAC
1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom
1896474 - Path based routing is broken for some combinations
1897431 - CIDR support for additional network attachment with the bridge CNI plug-in
1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes
1907433 - Excessive logging in image operator
1909906 - The router fails with PANIC error when stats port already in use
1911173 - [MSTR-998] Many charts' legend names show {{}} instead of words
1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting.
1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)
1917893 - [ovirt] install fails: due to terraform error "Cannot attach Virtual Disk: Disk is locked" on vm resource
1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name
1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation
1926522 - oc adm catalog does not clean temporary files
1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes.
1928141 - kube-storage-version-migrator constantly reporting type "Upgradeable" status Unknown
1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it's storageclass is not yet finished, confusing users
1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x
1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade
1937085 - RHV UPI inventory playbook missing guarantee_memory
1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion
1938236 - vsphere-problem-detector does not support overriding log levels via storage CR
1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods
1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer
1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]
1942913 - ThanosSidecarUnhealthy isn't resilient to WAL replays.
1943363 - [ovn] CNO should gracefully terminate ovn-northd
1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17
1948080 - authentication should not set Available=False APIServices_Error with 503s
1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set
1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0
1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer
1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs
1953264 - "remote error: tls: bad certificate" logs in prometheus-operator container
1955300 - Machine config operator reports unavailable for 23m during upgrade
1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set
1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set
1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters
1956496 - Needs SR-IOV Docs Upstream
1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret
1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid
1956964 - upload a boot-source to OpenShift virtualization using the console
1957547 - [RFE]VM name is not auto filled in dev console
1958349 - ovn-controller doesn't release the memory after cluster-density run
1959352 - [scale] failed to get pod annotation: timed out waiting for annotations
1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not
1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]
1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects
1961391 - String updates
1961509 - DHCP daemon pod should have CPU and memory requests set but not limits
1962066 - Edit machine/machineset specs not working
1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent
1963053 - oc whoami --show-console should show the web console URL, not the server api URL
1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters
1964327 - Support containers with name:tag@digest
1964789 - Send keys and disconnect does not work for VNC console
1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7
1966445 - Unmasking a service doesn't work if it masked using MCO
1966477 - Use GA version in KAS/OAS/OauthAS to avoid: "audit.k8s.io/v1beta1" is deprecated and will be removed in a future release, use "audit.k8s.io/v1" instead
1966521 - kube-proxy's userspace implementation consumes excessive CPU
1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up
1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount
1970218 - MCO writes incorrect file contents if compression field is specified
1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]
1970805 - Cannot create build when docker image url contains dir structure
1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io
1972827 - image registry does not remain available during upgrade
1972962 - Should set the minimum value for the --max-icsp-size flag of oc adm catalog mirror
1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run
1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not established
1976301 - [ci] e2e-azure-upi is permafailing
1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change.
1976674 - CCO didn't set Upgradeable to False when cco mode is configured to Manual on azure platform
1976894 - Unidling a StatefulSet does not work as expected
1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases
1977414 - Build Config timed out waiting for condition 400: Bad Request
1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus
1978528 - systemd-coredump started and failed intermittently for unknown reasons
1978581 - machine-config-operator: remove runlevel from mco namespace
1979562 - Cluster operators: don't show messages when neither progressing, degraded or unavailable
1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9
1979966 - OCP builds always fail when run on RHEL7 nodes
1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading
1981549 - Machine-config daemon does not recover from broken Proxy configuration
1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]
1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues
1982063 - 'Control Plane' is not translated in Simplified Chinese language in Home->Overview page
1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands
1982662 - Workloads - DaemonSets - Add storage: i18n misses
1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE "/secrets/encryption-config" on single node clusters
1983758 - upgrades are failing on disruptive tests
1983964 - Need Device plugin configuration for the NIC "needVhostNet" & "isRdma"
1984592 - global pull secret not working in OCP4.7.4+ for additional private registries
1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs
1985486 - Cluster Proxy not used during installation on OSP with Kuryr
1985724 - VM Details Page missing translations
1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted
1985933 - Downstream image registry recommendation
1985965 - oVirt CSI driver does not report volume stats
1986216 - [scale] SNO: Slow Pod recovery due to "timed out waiting for OVS port binding"
1986237 - "MachineNotYetDeleted" in Pending state , alert not fired
1986239 - crictl create fails with "PID namespace requested, but sandbox infra container invalid"
1986302 - console continues to fetch prometheus alert and silences for normal user
1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI
1986338 - error creating list of resources in Import YAML
1986502 - yaml multi file dnd duplicates previous dragged files
1986819 - fix string typos for hot-plug disks
1987044 - [OCPV48] Shutoff VM is being shown as "Starting" in WebUI when using spec.runStrategy Manual/RerunOnFailure
1987136 - Declare operatorframework.io/arch. labels for all operators
1987257 - Go-http-client user-agent being used for oc adm mirror requests
1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold
1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP
1988406 - SSH key dropped when selecting "Customize virtual machine" in UI
1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade
1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug Upgradeable: false when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie “csrf-token” will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ignored when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console Helm tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers declaration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to ‘Role name’ is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig Create VM
missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - oc adm prune deployments does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP address not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Observe->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in “oc adm extract --credentials-requests”
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn’t supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - oc adm prune deployments can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The default project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - oc adm prune deployments can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take effect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and remove bandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more than 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correctly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter “csi.storage.k8s.io/fstype” create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - Remove namespace column on VM list page when a project is selected
2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff
2044808 - machine-config-daemon-pull.service: use cp instead of cat when extracting MCD in OKD
2045024 - CustomNoUpgrade alerts should be ignored
2045112 - vsphere-problem-detector has missing rbac rules for leases
2045199 - SnapShot with Disk Hot-plug hangs
2045561 - Cluster Autoscaler should use the same default Group value as Cluster API
2045591 - Reconciliation of aws pod identity mutating webhook did not happen
2045849 - Add Sprint 212 translations
2045866 - MCO Operator pod spam "Error creating event" warning messages in 4.10
2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin
2045916 - [IBMCloud] Default machine profile in installer is unreliable
2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment
2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify
2046137 - oc output for unknown commands is not human readable
2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance
2046297 - Bump DB reconnect timeout
2046517 - In Notification drawer, the "Recommendations" header shows when there isn't any recommendations
2046597 - Observe > Targets page may show the wrong service monitor if multiple monitors have the same namespace & label selectors
2046626 - Allow setting custom metrics for Ansible-based Operators
2046683 - [AliCloud]"--scale-down-utilization-threshold" doesn't work on AliCloud
2047025 - Installation fails because of Alibaba CSI driver operator is degraded
2047190 - Bump Alibaba CSI driver for 4.10
2047238 - When using communities and localpreferences together, only localpreference gets applied
2047255 - alibaba: resourceGroupID not found
2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions
2047317 - Update HELM OWNERS files under Dev Console
2047455 - [IBM Cloud] Update custom image os type
2047496 - Add image digest feature
2047779 - do not degrade cluster if storagepolicy creation fails
2047927 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2047929 - use lease for leader election
2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2048046 - New route annotation to show another URL or hide topology URL decorator doesn't work for Knative Services
2048048 - Application tab in User Preferences dropdown menus are too wide.
2048050 - Topology list view items are not highlighted on keyboard navigation
2048117 - [IBM]Shouldn't change status.storage.bucket and status.storage.resourceKeyCRN when update spec.storage,ibmcos with invalid value
2048413 - Bond CNI: Failed to attach Bond NAD to pod
2048443 - Image registry operator panics when finalizes config deletion
2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*
2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt
2048598 - Web terminal view is broken
2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure
2048891 - Topology page is crashed
2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class
2049043 - Cannot create VM from template
2049156 - 'oc get project' caused 'Observed a panic: cannot deep copy core.NamespacePhase' when AllRequestBodies is used
2049886 - Placeholder bug for OCP 4.10.0 metadata release
2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning
2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2
2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0
2050227 - Installation on PSI fails with: 'openstack platform does not have the required standard-attr-tag network extension'
2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]
2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members
2050310 - ContainerCreateError when trying to launch large (>500) numbers of pods across nodes
2050370 - alert data for burn budget needs to be updated to prevent regression
2050393 - ZTP missing support for local image registry and custom machine config
2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud
2050737 - Remove metrics and events for master port offsets
2050801 - Vsphere upi tries to access vsphere during manifests generation phase
2050883 - Logger object in LSO does not log source location accurately
2051692 - co/image-registry is degraded because ImagePrunerDegraded: Job has reached the specified backoff limit
2052062 - Whereabouts should implement client-go 1.22+
2052125 - [4.10] Crio appears to be coredumping in some scenarios
2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config
2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade.
2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests
2052598 - kube-scheduler should use configmap lease
2052599 - kube-controller-manger should use configmap lease
2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh
2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics vsphere_rwx_volumes_total not valid
2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop
2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set.
2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1
2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch
2052756 - [4.10] PVs are not being cleaned up after PVC deletion
2053175 - oc adm catalog mirror throws 'missing signature key' error when using file://local/index
2053218 - ImagePull fails with error "unable to pull manifest from example.com/busy.box:v5 invalid reference format"
2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs
2053268 - inability to detect static lifecycle failure
2053314 - requestheader IDP test doesn't wait for cleanup, causing high failure rates
2053323 - OpenShift-Ansible BYOH Unit Tests are Broken
2053339 - Remove dev preview badge from IBM FlashSystem deployment windows
2053751 - ztp-site-generate container is missing convenience entrypoint
2053945 - [4.10] Failed to apply sriov policy on intel nics
2054109 - Missing "app" label
2054154 - RoleBinding in project without subject is causing "Project access" page to fail
2054244 - Latest pipeline run should be listed on the top of the pipeline run list
2054288 - console-master-e2e-gcp-console is broken
2054562 - DPU network operator 4.10 branch need to sync with master
2054897 - Unable to deploy hw-event-proxy operator
2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing frequently
2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line
2055371 - Remove Check which enforces summary_interval must match logSyncInterval
2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11
2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API
2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured
2056479 - ovirt-csi-driver-node pods are crashing intermittently
2056572 - reconcilePrecaching error: cannot list resource "clusterserviceversions" in API group "operators.coreos.com" at the cluster scope"
2056629 - [4.10] EFS CSI driver can't unmount volumes with "wait: no child processes"
2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs
2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation
2056948 - post 1.23 rebase: regression in service-load balancer reliability
2057438 - Service Level Agreement (SLA) always show 'Unknown'
2057721 - Fix Proxy support in RHACM 2.4.2
2057724 - Image creation fails when NMstateConfig CR is empty
2058641 - [4.10] Pod density test causing problems when using kube-burner
2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install
2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials
2060956 - service domain can't be resolved when networkpolicy is used in OCP 4.10-rc
- References:
https://access.redhat.com/security/cve/CVE-2014-3577 https://access.redhat.com/security/cve/CVE-2016-10228 https://access.redhat.com/security/cve/CVE-2017-14502 https://access.redhat.com/security/cve/CVE-2018-20843 https://access.redhat.com/security/cve/CVE-2018-1000858 https://access.redhat.com/security/cve/CVE-2019-8625 https://access.redhat.com/security/cve/CVE-2019-8710 https://access.redhat.com/security/cve/CVE-2019-8720 https://access.redhat.com/security/cve/CVE-2019-8743 https://access.redhat.com/security/cve/CVE-2019-8764 https://access.redhat.com/security/cve/CVE-2019-8766 https://access.redhat.com/security/cve/CVE-2019-8769 https://access.redhat.com/security/cve/CVE-2019-8771 https://access.redhat.com/security/cve/CVE-2019-8782 https://access.redhat.com/security/cve/CVE-2019-8783 https://access.redhat.com/security/cve/CVE-2019-8808 https://access.redhat.com/security/cve/CVE-2019-8811 https://access.redhat.com/security/cve/CVE-2019-8812 https://access.redhat.com/security/cve/CVE-2019-8813 https://access.redhat.com/security/cve/CVE-2019-8814 https://access.redhat.com/security/cve/CVE-2019-8815 https://access.redhat.com/security/cve/CVE-2019-8816 https://access.redhat.com/security/cve/CVE-2019-8819 https://access.redhat.com/security/cve/CVE-2019-8820 https://access.redhat.com/security/cve/CVE-2019-8823 https://access.redhat.com/security/cve/CVE-2019-8835 https://access.redhat.com/security/cve/CVE-2019-8844 https://access.redhat.com/security/cve/CVE-2019-8846 https://access.redhat.com/security/cve/CVE-2019-9169 https://access.redhat.com/security/cve/CVE-2019-13050 https://access.redhat.com/security/cve/CVE-2019-13627 https://access.redhat.com/security/cve/CVE-2019-14889 https://access.redhat.com/security/cve/CVE-2019-15903 https://access.redhat.com/security/cve/CVE-2019-19906 https://access.redhat.com/security/cve/CVE-2019-20454 https://access.redhat.com/security/cve/CVE-2019-20807 https://access.redhat.com/security/cve/CVE-2019-25013 
https://access.redhat.com/security/cve/CVE-2020-1730 https://access.redhat.com/security/cve/CVE-2020-3862 https://access.redhat.com/security/cve/CVE-2020-3864 https://access.redhat.com/security/cve/CVE-2020-3865 https://access.redhat.com/security/cve/CVE-2020-3867 https://access.redhat.com/security/cve/CVE-2020-3868 https://access.redhat.com/security/cve/CVE-2020-3885 https://access.redhat.com/security/cve/CVE-2020-3894 https://access.redhat.com/security/cve/CVE-2020-3895 https://access.redhat.com/security/cve/CVE-2020-3897 https://access.redhat.com/security/cve/CVE-2020-3899 https://access.redhat.com/security/cve/CVE-2020-3900 https://access.redhat.com/security/cve/CVE-2020-3901 https://access.redhat.com/security/cve/CVE-2020-3902 https://access.redhat.com/security/cve/CVE-2020-8927 https://access.redhat.com/security/cve/CVE-2020-9802 https://access.redhat.com/security/cve/CVE-2020-9803 https://access.redhat.com/security/cve/CVE-2020-9805 https://access.redhat.com/security/cve/CVE-2020-9806 https://access.redhat.com/security/cve/CVE-2020-9807 https://access.redhat.com/security/cve/CVE-2020-9843 https://access.redhat.com/security/cve/CVE-2020-9850 https://access.redhat.com/security/cve/CVE-2020-9862 https://access.redhat.com/security/cve/CVE-2020-9893 https://access.redhat.com/security/cve/CVE-2020-9894 https://access.redhat.com/security/cve/CVE-2020-9895 https://access.redhat.com/security/cve/CVE-2020-9915 https://access.redhat.com/security/cve/CVE-2020-9925 https://access.redhat.com/security/cve/CVE-2020-9952 https://access.redhat.com/security/cve/CVE-2020-10018 https://access.redhat.com/security/cve/CVE-2020-11793 https://access.redhat.com/security/cve/CVE-2020-13434 https://access.redhat.com/security/cve/CVE-2020-14391 https://access.redhat.com/security/cve/CVE-2020-15358 https://access.redhat.com/security/cve/CVE-2020-15503 https://access.redhat.com/security/cve/CVE-2020-25660 https://access.redhat.com/security/cve/CVE-2020-25677 
https://access.redhat.com/security/cve/CVE-2020-27618 https://access.redhat.com/security/cve/CVE-2020-27781 https://access.redhat.com/security/cve/CVE-2020-29361 https://access.redhat.com/security/cve/CVE-2020-29362 https://access.redhat.com/security/cve/CVE-2020-29363 https://access.redhat.com/security/cve/CVE-2021-3121 https://access.redhat.com/security/cve/CVE-2021-3326 https://access.redhat.com/security/cve/CVE-2021-3449 https://access.redhat.com/security/cve/CVE-2021-3450 https://access.redhat.com/security/cve/CVE-2021-3516 https://access.redhat.com/security/cve/CVE-2021-3517 https://access.redhat.com/security/cve/CVE-2021-3518 https://access.redhat.com/security/cve/CVE-2021-3520 https://access.redhat.com/security/cve/CVE-2021-3521 https://access.redhat.com/security/cve/CVE-2021-3537 https://access.redhat.com/security/cve/CVE-2021-3541 https://access.redhat.com/security/cve/CVE-2021-3733 https://access.redhat.com/security/cve/CVE-2021-3749 https://access.redhat.com/security/cve/CVE-2021-20305 https://access.redhat.com/security/cve/CVE-2021-21684 https://access.redhat.com/security/cve/CVE-2021-22946 https://access.redhat.com/security/cve/CVE-2021-22947 https://access.redhat.com/security/cve/CVE-2021-25215 https://access.redhat.com/security/cve/CVE-2021-27218 https://access.redhat.com/security/cve/CVE-2021-30666 https://access.redhat.com/security/cve/CVE-2021-30761 https://access.redhat.com/security/cve/CVE-2021-30762 https://access.redhat.com/security/cve/CVE-2021-33928 https://access.redhat.com/security/cve/CVE-2021-33929 https://access.redhat.com/security/cve/CVE-2021-33930 https://access.redhat.com/security/cve/CVE-2021-33938 https://access.redhat.com/security/cve/CVE-2021-36222 https://access.redhat.com/security/cve/CVE-2021-37750 https://access.redhat.com/security/cve/CVE-2021-39226 https://access.redhat.com/security/cve/CVE-2021-41190 https://access.redhat.com/security/cve/CVE-2021-43813 https://access.redhat.com/security/cve/CVE-2021-44716 
https://access.redhat.com/security/cve/CVE-2021-44717 https://access.redhat.com/security/cve/CVE-2022-0532 https://access.redhat.com/security/cve/CVE-2022-21673 https://access.redhat.com/security/cve/CVE-2022-24407 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

RHSA-announce mailing list: RHSA-announce@redhat.com, https://listman.redhat.com/mailman/listinfo/rhsa-announce

- Summary:
The Migration Toolkit for Containers (MTC) 1.6.0 is now available.

- Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

- Solution:
For details on how to install and use MTC, refer to:
https://docs.openshift.com/container-platform/4.8/migration_toolkit_for_containers/installing-mtc.html
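As a rough illustration of the Kubernetes API path mentioned in the MTC description above, a migration plan can be declared as a MigPlan custom resource instead of through the web console. This is a minimal sketch only: the API version, field names (srcMigClusterRef, destMigClusterRef, migStorageRef, namespaces), and all example names are assumptions to verify against the MigPlan CRD shipped with your MTC version.

```yaml
# Hypothetical MigPlan sketch; verify every field against the installed CRD
# (oc explain migplan.spec) before use.
apiVersion: migration.openshift.io/v1alpha1
kind: MigPlan
metadata:
  name: example-migplan              # example name
  namespace: openshift-migration
spec:
  srcMigClusterRef:                  # source cluster reference (assumed field name)
    name: source-cluster
    namespace: openshift-migration
  destMigClusterRef:                 # destination cluster reference (assumed field name)
    name: host
    namespace: openshift-migration
  migStorageRef:                     # replication repository reference (assumed field name)
    name: example-storage
    namespace: openshift-migration
  namespaces:                        # namespaces to migrate
    - example-app
```

Applying such a manifest with `oc apply -f migplan.yaml` would create the plan in the openshift-migration namespace, where the MTC controller reconciles it.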
- Bugs fixed (https://bugzilla.redhat.com/):
1878824 - Web console is not accessible when deployed on OpenShift cluster on IBM Cloud
1887526 - "Stage" pods fail when migrating from classic OpenShift source cluster on IBM Cloud with block storage
1899562 - MigMigration custom resource does not display an error message when a migration fails because of volume mount error
1936886 - Service account token of existing remote cluster cannot be updated by using the web console
1936894 - "Ready" status of MigHook and MigPlan custom resources is not synchronized automatically
1949117 - "Migration plan resources" page displays a permanent error message when a migration plan is deleted from the backend
1951869 - MigPlan custom resource does not detect invalid source cluster reference
1968621 - Paused deployment config causes a migration to hang
1970338 - Parallel migrations fail because the initial backup is missing
1974737 - Migration plan name length in the "Migration plan" wizard is not validated
1975369 - "Debug view" link text on "Migration plans" page can be improved
1975372 - Destination namespace in MigPlan custom resource is not validated
1976895 - Namespace mapping cannot be changed using the Migration Plan wizard
1981810 - "Excluded" resources are not excluded from the migration
1982026 - Direct image migration fails if the source URI contains a double slash ("//")
1994985 - Web console crashes when a MigPlan custom resource is created with an empty namespaces list
1996169 - When "None" is selected as the target storage class in the web console, the setting is ignored and the default storage class is used
1996627 - MigPlan custom resource displays a "PvUsageAnalysisFailed" warning after a successful PVC migration
1996784 - "Migration resources" tree on the "Migration details" page is not displayed
1996902 - "Select all" checkbox on the "Namespaces" page of the "Migration plan" wizard remains selected after a namespace is unselected
1996904 - "Migration" dialogs on the "Migration plans" page display inconsistent capitalization
1996906 - "Migration details" page link is displayed for a migration plan with no associated migrations
1996938 - Search function on "Migration plans" page displays no results
1997051 - Indirect migration from MTC 1.5.1 to 1.6.0 fails during "StageBackup" phase
1997127 - Direct volume migration "retry" feature does not work correctly after a network failure
1997173 - Migration of custom resource definitions to OpenShift Container Platform 4.9 fails because of API version incompatibility
1997180 - "migration-log-reader" pod does not log invalid Rsync options
1997665 - Selected PVCs in the "State migration" dialog are reset because of background polling
1997694 - "Update operator" link on the "Clusters" page is incorrect
1997827 - "Migration plan" wizard displays PVC names incorrectly formatted after running state migration
1998062 - Rsync pod uses upstream image
1998283 - "Migration step details" link on the "Migrations" page does not work
1998550 - "Migration plan" wizard does not support certain screen resolutions
1998581 - "Migration details" link on "Migration plans" page displays "latestIsFailed" error
1999113 - "oc describe" and "oc log" commands on "Migration resources" tree cannot be copied after failed migration
1999381 - MigPlan custom resource displays "Stage completed with warnings" status after successful migration
1999528 - Position of the "Add migration plan" button is different from the other "Add" buttons
1999765 - "Migrate" button on "State migration" dialog is enabled when no PVCs are selected
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2000205 - "Options" menu on the "Migration details" page displays incorrect items
2000218 - Validation incorrectly blocks namespace mapping if a source cluster namespace is the same as the destination namespace
2000243 - "Migration plan" wizard does not allow a migration within the same cluster
2000644 - Invalid migration plan causes "controller" pod to crash
2000875 - State migration status on "Migrations" page displays "Stage succeeded" message
2000979 - "clusterIPs" parameter of "service" object can cause Velero errors
2001089 - Direct volume migration fails because of missing CA path configuration
2001173 - Migration plan requires two clusters
2001786 - Migration fails during "Stage Backup" step because volume path on host not found
2001829 - Migration does not complete when the namespace contains a cron job with a PVC
2001941 - Fixing PVC conflicts in state migration plan using the web console causes the migration to run twice
2002420 - "Stage" pod not created for completed application pod, causing the "mig-controller" to stall
2002608 - Migration of unmounted PVC fails during "StageBackup" phase
2002897 - Rollback migration does not complete when the namespace contains a cron job
2003603 - "View logs" dialog displays the "--selector" option, which does not print all logs
2004601 - Migration plan status on "Migration plans" page is "Ready" after migration completed with warnings
2004923 - Web console displays "New operator version available" notification for incorrect operator
2005143 - Combining Rsync and Stunnel in a single pod can degrade performance
2006316 - Web console cannot create migration plan in a proxy environment
2007175 - Web console cannot be launched in a proxy environment
- JIRA issues fixed (https://issues.jboss.org/):
MIG-785 - Search for "Crane" in the Operator Hub should display the Migration Toolkit for Containers
- Description:
The release of RHACS 3.67 provides the following new features, bug fixes, security patches and system changes:
OpenShift Dedicated support
RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on Amazon Web Services and Google Cloud Platform.

Use OpenShift OAuth server as an identity provider

If you are using RHACS with OpenShift, you can now configure the built-in OpenShift OAuth server as an identity provider for RHACS.

Enhancements for CI outputs

Red Hat has improved the usability of RHACS CI integrations. CI outputs now show additional detailed information about the vulnerabilities and the security policies responsible for broken builds.

Runtime Class policy criteria

Users can now use RHACS to define the container runtime configuration that may be used to run a pod’s containers using the Runtime Class policy criteria.
Bug Fixes

The release of RHACS 3.67 includes the following bug fixes:
- Previously, when using RHACS with the Compliance Operator integration, RHACS did not respect or populate Compliance Operator TailoredProfiles. This has been fixed.
- Previously, the Alpine Linux package manager (APK) in Image policy looked for the presence of the apk package in the image rather than the apk-tools package. This issue has been fixed.
System changes

The release of RHACS 3.67 includes the following system changes:
- Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
- The Port exposure method policy criteria now include route as an exposure method.
- The OpenShift: Kubeadmin Secret Accessed security policy now allows the OpenShift Compliance Operator to check for the existence of the Kubeadmin secret without creating a violation.
- The OpenShift Compliance Operator integration now supports using TailoredProfiles.
- The RHACS Jenkins plugin now provides additional security information.
- When you enable the environment variable ROX_NETWORK_ACCESS_LOG for Central, the logs contain the Request URI and X-Forwarded-For header values.
- The default uid:gid pair for the Scanner image is now 65534:65534.
- RHACS adds a new default Scope Manager role that includes minimum permissions to create and modify access scopes.
- In addition to manually uploading vulnerability definitions in offline mode, you can now upload definitions in online mode.
- You can now format the output of the following roxctl CLI commands in table, csv, or JSON format: image scan, image check & deployment check.
- You can now use a regular expression for the deployment name while specifying policy exclusions.
- Solution:
To take advantage of these new features, fixes and changes, please upgrade Red Hat Advanced Cluster Security for Kubernetes to version 3.67.

- Bugs fixed (https://bugzilla.redhat.com/):
1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API
JIRA issues fixed (https://issues.jboss.org/):

RHACS-65 - Release RHACS 3.67.0
Clusters and applications are all visible and managed from a single console, with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:

- CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method
- CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack
- CVE-2021-32627: redis: Integer overflow issue with Streams
- CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure
- CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser
- CVE-2021-32675: redis: Denial of service via Redis Standard Protocol (RESP) request
- CVE-2021-32687: redis: Integer overflow issue with intsets
- CVE-2021-32690: helm: information disclosure vulnerability
- CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
- CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
- CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
- CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow
- CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings
- CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim function
- CVE-2021-41099: redis: Integer overflow issue with strings
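Two of the fixes above (CVE-2021-33623 in nodejs-trim-newlines and CVE-2021-3749 in nodejs-axios) belong to the ReDoS class: a regular expression with nested quantifiers backtracks exponentially on a crafted near-miss input, pinning a CPU with a single short string. The pattern below is the canonical illustration of the bug class, not the actual expression from either library:

```python
import re

# Classic catastrophic-backtracking shape: a quantified group that is itself
# quantified. On a matching input the greedy first attempt succeeds quickly;
# on a near-miss the engine tries every partition of the 'a' run between the
# inner and outer '+', which is exponential in the input length.
EVIL = re.compile(r"^(a+)+$")

assert EVIL.match("a" * 20)  # matching input: fast

# A crafted near-miss forces the exponential case. Even ~30 characters can
# take seconds, and each extra character roughly doubles the work:
#
#   EVIL.match("a" * 30 + "!")   # deliberately not executed here
#
# Standard fixes: remove the nesting so the pattern is unambiguous, cap the
# input length before matching, or avoid the regex entirely (axios's fix for
# its trim helper moved to the built-in string trim).
SAFE = re.compile(r"^a+$")
assert SAFE.match("a" * 20)
assert not SAFE.match("a" * 20 + "!")  # fails in linear time
```

Because backtracking engines (Node's V8, Python's re) all share this failure mode, any user-controlled string that reaches an ambiguous pattern is a denial-of-service surface.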
Bug fixes:

- RFE ACM Application management UI doesn't reflect object status (Bugzilla #1965321)
- RHACM 2.4 files (Bugzilla #1983663)
- Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4 (Bugzilla #1993366)
- submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla #1994668)
- ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb (Bugzilla #2000274)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2003915)
- InfraEnv condition does not reflect the actual error message (Bugzilla #2009204, #2010030)
- Flaky test point to a nil pointer conditions list (Bugzilla #2010175)
- InfraEnv status shows 'Failed to create image: internal error' (Bugzilla #2010272)
- subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla #2013157)
- pre-network-manager-config failed due to timeout when static config is used (Bugzilla #2014084)
Bugs fixed (https://bugzilla.redhat.com/):

1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1965321 - RFE ACM Application management UI doesn't reflect object status
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1983663 - RHACM 2.4.0 images
1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4
1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot
1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb
2003915 - pre-network-manager-config failed due to timeout when static config is used
2009204 - InfraEnv condition does not reflect the actual error message
2010030 - InfraEnv condition does not reflect the actual error message
2010175 - Flaky test point to a nil pointer conditions list
2010272 - InfraEnv status shows 'Failed to create image: internal error'
2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets
2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request
2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser
2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure
2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams
2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack
2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks
2014084 - pre-network-manager-config failed due to timeout when static config is used
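The two nodejs-tar fixes in the list above (CVE-2021-32803 and CVE-2021-32804) both come down to one missing check: member paths and symlink targets must be validated against the extraction root before anything is written. The same class of bug exists in any tar extractor, so here is a sketch of that check using Python's standard tarfile module; the helper names (`is_within`, `safe_members`) are illustrative, not part of any library:

```python
import io
import os
import tarfile

def is_within(base: str, target: str) -> bool:
    """True if `target`, resolved relative to `base`, stays inside `base`."""
    base = os.path.realpath(base)
    target = os.path.realpath(os.path.join(base, target))
    return os.path.commonpath([base, target]) == base

def safe_members(tar: tarfile.TarFile, dest: str):
    """Yield only members that cannot escape `dest` -- the validation that
    the nodejs-tar advisories showed was insufficient."""
    for member in tar.getmembers():
        # Absolute paths and ../ traversal (CVE-2021-32804 class).
        if member.name.startswith(("/", "\\")) or not is_within(dest, member.name):
            continue
        # Symlinks whose target escapes the tree (CVE-2021-32803 class).
        if member.issym():
            link = os.path.join(os.path.dirname(member.name), member.linkname)
            if not is_within(dest, link):
                continue
        yield member

# Build a malicious archive in memory: an absolute path, a traversal, one
# legitimate file.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("/etc/evil", "../evil", "ok.txt"):
        info = tarfile.TarInfo(name)
        data = b"x"
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    kept = [m.name for m in safe_members(tar, "/tmp/extract")]
assert kept == ["ok.txt"]  # both escape attempts are filtered out
```

The extraction call then becomes `tar.extractall(dest, members=safe_members(tar, dest))`, so hostile members are dropped before any filesystem write occurs.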
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202108-1941", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "axios", "scope": "lte", "trust": 1.0, "vendor": "axios", "version": "0.21.1" }, { "model": "goldengate", "scope": "lt", "trust": 1.0, "vendor": "oracle", "version": 
"21.7.0.0.0" }, { "model": "goldengate", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "21.1" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "axios", "scope": null, "trust": 0.8, "vendor": "axios", "version": null }, { "model": "axios", "scope": "eq", "trust": 0.8, "vendor": "axios", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:axios:axios:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndIncluding": "0.21.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:goldengate:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "21.7.0.0.0", "versionStartIncluding": "21.1", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-3749" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202108-2780" } ], "trust": 0.6 }, "cve": "CVE-2021-3749", "cvss": { 
"@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 7.8, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 6.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Complete", "baseScore": 7.8, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3749", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:N/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", 
"scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "security@huntr.dev", "availabilityImpact": "HIGH", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-3749", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-3749", "trust": 1.8, "value": "HIGH" }, { "author": "security@huntr.dev", "id": "CVE-2021-3749", "trust": 1.0, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202108-2780", "trust": 0.6, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2021-3749", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "db": "NVD", "id": "CVE-2021-3749" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "axios is vulnerable to Inefficient Regular 
Expression Complexity. axios Exists in a resource exhaustion vulnerability.Service operation interruption (DoS) It may be in a state. Pillow is a Python-based image processing library. \nThere is currently no information about this vulnerability, please feel free to follow CNNVD or manufacturer announcements. Relevant releases/architectures:\n\n2.0 - ppc64le, s390x, x86_64\n\n3. Solution:\n\nThe OpenShift Service Mesh release notes provide information on the\nfeatures and known issues:\n\nhttps://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html\n\n5. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: OpenShift Container Platform 4.10.3 security update\nAdvisory ID: RHSA-2022:0056-01\nProduct: Red Hat OpenShift Enterprise\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:0056\nIssue date: 2022-03-10\nCVE Names: CVE-2014-3577 CVE-2016-10228 CVE-2017-14502 \n CVE-2018-20843 CVE-2018-1000858 CVE-2019-8625 \n CVE-2019-8710 CVE-2019-8720 CVE-2019-8743 \n CVE-2019-8764 CVE-2019-8766 CVE-2019-8769 \n CVE-2019-8771 CVE-2019-8782 CVE-2019-8783 \n CVE-2019-8808 CVE-2019-8811 CVE-2019-8812 \n CVE-2019-8813 CVE-2019-8814 CVE-2019-8815 \n CVE-2019-8816 CVE-2019-8819 CVE-2019-8820 \n CVE-2019-8823 CVE-2019-8835 CVE-2019-8844 \n CVE-2019-8846 CVE-2019-9169 CVE-2019-13050 \n CVE-2019-13627 CVE-2019-14889 CVE-2019-15903 \n CVE-2019-19906 CVE-2019-20454 CVE-2019-20807 \n CVE-2019-25013 CVE-2020-1730 CVE-2020-3862 \n CVE-2020-3864 CVE-2020-3865 CVE-2020-3867 \n CVE-2020-3868 CVE-2020-3885 CVE-2020-3894 \n CVE-2020-3895 CVE-2020-3897 CVE-2020-3899 \n CVE-2020-3900 CVE-2020-3901 CVE-2020-3902 \n CVE-2020-8927 CVE-2020-9802 CVE-2020-9803 \n CVE-2020-9805 CVE-2020-9806 CVE-2020-9807 \n 
CVE-2020-9843 CVE-2020-9850 CVE-2020-9862 \n CVE-2020-9893 CVE-2020-9894 CVE-2020-9895 \n CVE-2020-9915 CVE-2020-9925 CVE-2020-9952 \n CVE-2020-10018 CVE-2020-11793 CVE-2020-13434 \n CVE-2020-14391 CVE-2020-15358 CVE-2020-15503 \n CVE-2020-25660 CVE-2020-25677 CVE-2020-27618 \n CVE-2020-27781 CVE-2020-29361 CVE-2020-29362 \n CVE-2020-29363 CVE-2021-3121 CVE-2021-3326 \n CVE-2021-3449 CVE-2021-3450 CVE-2021-3516 \n CVE-2021-3517 CVE-2021-3518 CVE-2021-3520 \n CVE-2021-3521 CVE-2021-3537 CVE-2021-3541 \n CVE-2021-3733 CVE-2021-3749 CVE-2021-20305 \n CVE-2021-21684 CVE-2021-22946 CVE-2021-22947 \n CVE-2021-25215 CVE-2021-27218 CVE-2021-30666 \n CVE-2021-30761 CVE-2021-30762 CVE-2021-33928 \n CVE-2021-33929 CVE-2021-33930 CVE-2021-33938 \n CVE-2021-36222 CVE-2021-37750 CVE-2021-39226 \n CVE-2021-41190 CVE-2021-43813 CVE-2021-44716 \n CVE-2021-44717 CVE-2022-0532 CVE-2022-21673 \n CVE-2022-24407 \n=====================================================================\n\n1. Summary:\n\nRed Hat OpenShift Container Platform release 4.10.3 is now available with\nupdates to packages and images that fix several bugs and add enhancements. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.10.3. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHSA-2022:0055\n\nSpace precludes documenting all of the container images in this advisory. 
\nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nSecurity Fix(es):\n\n* gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index\nvalidation (CVE-2021-3121)\n* grafana: Snapshot authentication bypass (CVE-2021-39226)\n* golang: net/http: limit growth of header canonicalization cache\n(CVE-2021-44716)\n* nodejs-axios: Regular expression denial of service in trim function\n(CVE-2021-3749)\n* golang: syscall: don\u0027t close fd 0 on ForkExec error (CVE-2021-44717)\n* grafana: Forward OAuth Identity Token can allow users to access some data\nsources (CVE-2022-21673)\n* grafana: directory traversal vulnerability (CVE-2021-43813)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-x86_64\n\nThe image digest is\nsha256:7ffe4cd612be27e355a640e5eec5cd8f923c1400d969fd590f806cffdaabcc56\n\n(For s390x architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-s390x\n\nThe image digest is\nsha256:4cf21a9399da1ce8427246f251ae5dedacfc8c746d2345f9cfe039ed9eda3e69\n\n(For ppc64le architecture)\n\n $ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.10.3-ppc64le\n\nThe image digest is\nsha256:4ee571da1edf59dfee4473aa4604aba63c224bf8e6bcf57d048305babbbde93c\n\nAll OpenShift Container Platform 4.10 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. 
Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.10 see the following documentation,\nwhich will be updated shortly for this release, for moderate instructions\non how to upgrade your cluster and fully apply this asynchronous errata\nupdate:\n\nhttps://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n1808240 - Always return metrics value for pods under the user\u0027s namespace\n1815189 - feature flagged UI does not always become available after operator installation\n1825034 - e2e: Mock CSI tests fail on IBM ROKS clusters\n1826225 - edge terminated h2 (gRPC) connections need a haproxy template change to work correctly\n1860774 - csr for vSphere egress nodes were not approved automatically during cert renewal\n1878106 - token inactivity timeout is not shortened after oauthclient/oauth config values are lowered\n1878925 - \u0027oc adm upgrade --to ...\u0027 rejects versions which occur only in history, while the cluster-version operator supports history fallback\n1880738 - origin e2e test deletes original worker\n1882983 - oVirt csi driver should refuse to provision RWX and ROX PV\n1886450 - Keepalived router id check not documented for RHV/VMware IPI\n1889488 - The metrics endpoint for the Scheduler is not protected by RBAC\n1894431 - Router pods fail to boot if the SSL certificate applied is missing an empty line at the bottom\n1896474 - Path based routing is broken for some combinations\n1897431 - CIDR support for additional network attachment with the bridge CNI plug-in\n1903408 - NodePort externalTrafficPolicy does not work for ovn-kubernetes\n1907433 - Excessive logging in image 
operator\n1909906 - The router fails with PANIC error when stats port already in use\n1911173 - [MSTR-998] Many charts\u0027 legend names show {{}} instead of words\n1914053 - pods assigned with Multus whereabouts IP get stuck in ContainerCreating state after node rebooting. \n1916169 - a reboot while MCO is applying changes leaves the node in undesirable state and MCP looks fine (UPDATED=true)\n1917893 - [ovirt] install fails: due to terraform error \"Cannot attach Virtual Disk: Disk is locked\" on vm resource\n1921627 - GCP UPI installation failed due to exceeding gcp limitation of instance group name\n1921650 - CVE-2021-3121 gogo/protobuf: plugin/unmarshal/unmarshal.go lacks certain index validation\n1926522 - oc adm catalog does not clean temporary files\n1927478 - Default CatalogSources deployed by marketplace do not have toleration for tainted nodes. \n1928141 - kube-storage-version-migrator constantly reporting type \"Upgradeable\" status Unknown\n1928285 - [LSO][OCS][arbiter] OCP Console shows no results while in fact underlying setup of LSO localvolumeset and it\u0027s storageclass is not yet finished, confusing users\n1931594 - [sig-cli] oc --request-timeout works as expected fails frequently on s390x\n1933847 - Prometheus goes unavailable (both instances down) during 4.8 upgrade\n1937085 - RHV UPI inventory playbook missing guarantee_memory\n1937196 - [aws ebs csi driver] events for block volume expansion may cause confusion\n1938236 - vsphere-problem-detector does not support overriding log levels via storage CR\n1939401 - missed labels for CMO/openshift-state-metric/telemeter-client/thanos-querier pods\n1939435 - Setting an IPv6 address in noProxy field causes error in openshift installer\n1939552 - [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]\n1942913 - ThanosSidecarUnhealthy isn\u0027t 
resilient to WAL replays. \n1943363 - [ovn] CNO should gracefully terminate ovn-northd\n1945274 - ostree-finalize-staged.service failed while upgrading a rhcos node to 4.6.17\n1948080 - authentication should not set Available=False APIServices_Error with 503s\n1949262 - Prometheus Statefulsets should have 2 replicas and hard affinity set\n1949672 - [GCP] Update 4.8 UPI template to match ignition version: 3.2.0\n1950827 - [LSO] localvolumediscoveryresult name is not friendly to customer\n1952576 - csv_succeeded metric not present in olm-operator for all successful CSVs\n1953264 - \"remote error: tls: bad certificate\" logs in prometheus-operator container\n1955300 - Machine config operator reports unavailable for 23m during upgrade\n1955489 - Alertmanager Statefulsets should have 2 replicas and hard affinity set\n1955490 - Thanos ruler Statefulsets should have 2 replicas and hard affinity set\n1955544 - [IPI][OSP] densed master-only installation with 0 workers fails due to missing worker security group on masters\n1956496 - Needs SR-IOV Docs Upstream\n1956739 - Permission for authorized_keys for core user changes from core user to root when changed the pull secret\n1956776 - [vSphere] Installer should do pre-check to ensure user-provided network name is valid\n1956964 - upload a boot-source to OpenShift virtualization using the console\n1957547 - [RFE]VM name is not auto filled in dev console\n1958349 - ovn-controller doesn\u0027t release the memory after cluster-density run\n1959352 - [scale] failed to get pod annotation: timed out waiting for annotations\n1960378 - icsp allows mirroring of registry root - install-config imageContentSources does not\n1960674 - Broken test: [sig-imageregistry][Serial][Suite:openshift/registry/serial] Image signature workflow can push a signed image to openshift registry and verify it [Suite:openshift/conformance/serial]\n1961317 - storage ClusterOperator does not declare ClusterRoleBindings in relatedObjects\n1961391 - String 
updates\n1961509 - DHCP daemon pod should have CPU and memory requests set but not limits\n1962066 - Edit machine/machineset specs not working\n1962206 - openshift-multus/dhcp-daemon set should meet platform requirements for update strategy that have maxUnavailable update of 10 or 33 percent\n1963053 - `oc whoami --show-console` should show the web console URL, not the server api URL\n1964112 - route SimpleAllocationPlugin: host name validation errors: spec.host: Invalid value: ... must be no more than 63 characters\n1964327 - Support containers with name:tag@digest\n1964789 - Send keys and disconnect does not work for VNC console\n1965368 - ClusterQuotaAdmission received non-meta object - message constantly reported in OpenShift Container Platform 4.7\n1966445 - Unmasking a service doesn\u0027t work if it masked using MCO\n1966477 - Use GA version in KAS/OAS/OauthAS to avoid: \"audit.k8s.io/v1beta1\" is deprecated and will be removed in a future release, use \"audit.k8s.io/v1\" instead\n1966521 - kube-proxy\u0027s userspace implementation consumes excessive CPU\n1968364 - [Azure] when using ssh type ed25519 bootstrap fails to come up\n1970021 - nmstate does not persist its configuration due to overlay systemd-connections-merged mount\n1970218 - MCO writes incorrect file contents if compression field is specified\n1970331 - [sig-auth][Feature:SCC][Early] should not have pod creation failures during install [Suite:openshift/conformance/parallel]\n1970805 - Cannot create build when docker image url contains dir structure\n1972033 - [azure] PV region node affinity is failure-domain.beta.kubernetes.io instead of topology.kubernetes.io\n1972827 - image registry does not remain available during upgrade\n1972962 - Should set the minimum value for the `--max-icsp-size` flag of `oc adm catalog mirror`\n1973447 - ovn-dbchecker peak memory spikes to ~500MiB during cluster-density run\n1975826 - ovn-kubernetes host directed traffic cannot be offloaded as CT zone 64000 is not 
established\n1976301 - [ci] e2e-azure-upi is permafailing\n1976399 - During the upgrade from OpenShift 4.5 to OpenShift 4.6 the election timers for the OVN north and south databases did not change. \n1976674 - CCO didn\u0027t set Upgradeable to False when cco mode is configured to Manual on azure platform\n1976894 - Unidling a StatefulSet does not work as expected\n1977319 - [Hive] Remove stale cruft installed by CVO in earlier releases\n1977414 - Build Config timed out waiting for condition 400: Bad Request\n1977929 - [RFE] Display Network Attachment Definitions from openshift-multus namespace during OCS deployment via UI using Multus\n1978528 - systemd-coredump started and failed intermittently for unknown reasons\n1978581 - machine-config-operator: remove runlevel from mco namespace\n1979562 - Cluster operators: don\u0027t show messages when neither progressing, degraded or unavailable\n1979962 - AWS SDN Network Stress tests have not passed in 4.9 release-openshift-origin-installer-e2e-aws-sdn-network-stress-4.9\n1979966 - OCP builds always fail when run on RHEL7 nodes\n1981396 - Deleting pool inside pool page the pool stays in Ready phase in the heading\n1981549 - Machine-config daemon does not recover from broken Proxy configuration\n1981867 - [sig-cli] oc explain should contain proper fields description for special types [Suite:openshift/conformance/parallel]\n1981941 - Terraform upgrade required in openshift-installer to resolve multiple issues\n1982063 - \u0027Control Plane\u0027 is not translated in Simplified Chinese language in Home-\u003eOverview page\n1982498 - Default registry credential path should be adjusted to use containers/auth.json for oc commands\n1982662 - Workloads - DaemonSets - Add storage: i18n misses\n1982726 - kube-apiserver audit logs show a lot of 404 errors for DELETE \"*/secrets/encryption-config\" on single node clusters\n1983758 - upgrades are failing on disruptive tests\n1983964 - Need Device plugin configuration for the NIC 
\"needVhostNet\" \u0026 \"isRdma\"\n1984592 - global pull secret not working in OCP4.7.4+ for additional private registries\n1985073 - new-in-4.8 ExtremelyHighIndividualControlPlaneCPU fires on some GCP update jobs\n1985486 - Cluster Proxy not used during installation on OSP with Kuryr\n1985724 - VM Details Page missing translations\n1985838 - [OVN] CNO exportNetworkFlows does not clear collectors when deleted\n1985933 - Downstream image registry recommendation\n1985965 - oVirt CSI driver does not report volume stats\n1986216 - [scale] SNO: Slow Pod recovery due to \"timed out waiting for OVS port binding\"\n1986237 - \"MachineNotYetDeleted\" in Pending state , alert not fired\n1986239 - crictl create fails with \"PID namespace requested, but sandbox infra container invalid\"\n1986302 - console continues to fetch prometheus alert and silences for normal user\n1986314 - Current MTV installation for KubeVirt import flow creates unusable Forklift UI\n1986338 - error creating list of resources in Import YAML\n1986502 - yaml multi file dnd duplicates previous dragged files\n1986819 - fix string typos for hot-plug disks\n1987044 - [OCPV48] Shutoff VM is being shown as \"Starting\" in WebUI when using spec.runStrategy Manual/RerunOnFailure\n1987136 - Declare operatorframework.io/arch.* labels for all operators\n1987257 - Go-http-client user-agent being used for oc adm mirror requests\n1987263 - fsSpaceFillingUpWarningThreshold not aligned to Kubernetes Garbage Collection Threshold\n1987445 - MetalLB integration: All gateway routers in the cluster answer ARP requests for LoadBalancer services IP\n1988406 - SSH key dropped when selecting \"Customize virtual machine\" in UI\n1988440 - Network operator changes ovnkube-config too early causing ovnkube-master pods to crashloop during cluster upgrade\n1988483 - Azure drop ICMP need to frag FRAG when using OVN: openshift-apiserver becomes False after env runs some time due to communication between one master to pods on another 
master fails with "Unable to connect to the server"
1988879 - Virtual media based deployment fails on Dell servers due to pending Lifecycle Controller jobs
1989438 - expected replicas is wrong
1989502 - Developer Catalog is disappearing after short time
1989843 - 'More' and 'Show Less' functions are not translated on several page
1990014 - oc debug <pod-name> does not work for Windows pods
1990190 - e2e testing failed with basic manifest: reason/ExternalProvisioning waiting for a volume to be created
1990193 - 'more' and 'Show Less' is not being translated on Home -> Search page
1990255 - Partial or all of the Nodes/StorageClasses don't appear back on UI after text is removed from search bar
1990489 - etcdHighNumberOfFailedGRPCRequests fires only on metal env in CI
1990506 - Missing udev rules in initramfs for /dev/disk/by-id/scsi-* symlinks
1990556 - get-resources.sh doesn't honor the no_proxy settings even with no_proxy var
1990625 - Ironic agent registers with SLAAC address with privacy-stable
1990635 - CVO does not recognize the channel change if desired version and channel changed at the same time
1991067 - github.com can not be resolved inside pods where cluster is running on openstack.
1991573 - Enable typescript strictNullCheck on network-policies files
1991641 - Baremetal Cluster Operator still Available After Delete Provisioning
1991770 - The logLevel and operatorLogLevel values do not work with Cloud Credential Operator
1991819 - Misspelled word "ocurred" in oc inspect cmd
1991942 - Alignment and spacing fixes
1992414 - Two rootdisks show on storage step if 'This is a CD-ROM boot source' is checked
1992453 - The configMap failed to save on VM environment tab
1992466 - The button 'Save' and 'Reload' are not translated on vm environment tab
1992475 - The button 'Open console in New Window' and 'Disconnect' are not translated on vm console tab
1992509 - Could not customize boot source due to source PVC not found
1992541 - all the alert rules' annotations "summary" and "description" should comply with the OpenShift alerting guidelines
1992580 - storageProfile should stay with the same value by check/uncheck the apply button
1992592 - list-type missing in oauth.config.openshift.io for identityProviders breaking Server Side Apply
1992777 - [IBMCLOUD] Default "ibm_iam_authorization_policy" is not working as expected in all scenarios
1993364 - cluster destruction fails to remove router in BYON with Kuryr as primary network (even after BZ 1940159 got fixed)
1993376 - periodic-ci-openshift-release-master-ci-4.6-upgrade-from-stable-4.5-e2e-azure-upgrade is permfailing
1994094 - Some hardcodes are detected at the code level in OpenShift console components
1994142 - Missing required cloud config fields for IBM Cloud
1994733 - MetalLB: IP address is not assigned to service if there is duplicate IP address in two address pools
1995021 - resolv.conf and corefile sync slows down/stops after keepalived container restart
1995335 - [SCALE] ovnkube CNI: remove ovs flows check
1995493 - Add Secret to workload button and Actions button are not aligned on secret details page
1995531 - Create RDO-based Ironic image to be promoted to OKD
1995545 - Project drop-down amalgamates inside main screen while creating storage system for odf-operator
1995887 - [OVN]After reboot egress node, lr-policy-list was not correct, some duplicate records or missed internal IPs
1995924 - CMO should report `Upgradeable: false` when HA workload is incorrectly spread
1996023 - kubernetes.io/hostname values are larger than filter when create localvolumeset from webconsole
1996108 - Allow backwards compatibility of shared gateway mode to inject host-based routes into OVN
1996624 - 100% of the cco-metrics/cco-metrics targets in openshift-cloud-credential-operator namespace are down
1996630 - Fail to delete the first Authorized SSH Key input box on Advanced page
1996647 - Provide more useful degraded message in auth operator on DNS errors
1996736 - Large number of 501 lr-policies in INCI2 env
1996886 - timedout waiting for flows during pod creation and ovn-controller pegged on worker nodes
1996916 - Special Resource Operator(SRO) - Fail to deploy simple-kmod on GCP
1996928 - Enable default operator indexes on ARM
1997028 - prometheus-operator update removes env var support for thanos-sidecar
1997059 - Failed to create cluster in AWS us-east-1 region due to a local zone is used
1997226 - Ingresscontroller reconcilations failing but not shown in operator logs or status of ingresscontroller.
1997245 - "Subscription already exists in openshift-storage namespace" error message is seen while installing odf-operator via UI
1997269 - Have to refresh console to install kube-descheduler
1997478 - Storage operator is not available after reboot cluster instances
1997509 - flake: [sig-cli] oc builds new-build [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
1997967 - storageClass is not reserved from default wizard to customize wizard
1998035 - openstack IPI CI: custom var-lib-etcd.mount (ramdisk) unit is racing due to incomplete After/Before order
1998038 - [e2e][automation] add tests for UI for VM disk hot-plug
1998087 - Fix CephHealthCheck wrapping contents and add data-tests for HealthItem and SecondaryStatus
1998174 - Create storageclass gp3-csi after install ocp cluster on aws
1998183 - "r: Bad Gateway" info is improper
1998235 - Firefox warning: Cookie "csrf-token" will be soon rejected
1998377 - Filesystem table head is not full displayed in disk tab
1998378 - Virtual Machine is 'Not available' in Home -> Overview -> Cluster inventory
1998519 - Add fstype when create localvolumeset instance on web console
1998951 - Keepalived conf ingress peer on in Dual stack cluster contains both IPv6 and IPv4 addresses
1999076 - [UI] Page Not Found error when clicking on Storage link provided in Overview page
1999079 - creating pods before sriovnetworknodepolicy sync up succeed will cause node unschedulable
1999091 - Console update toast notification can appear multiple times
1999133 - removing and recreating static pod manifest leaves pod in error state
1999246 - .indexignore is not ingore when oc command load dc configuration
1999250 - ArgoCD in GitOps operator can't manage namespaces
1999255 - ovnkube-node always crashes out the first time it starts
1999261 - ovnkube-node log spam (and security token leak?)
1999309 - While installing odf-operator via UI, web console update pop-up navigates to OperatorHub -> Operator Installation page
1999314 - console-operator is slow to mark Degraded as False once console starts working
1999425 - kube-apiserver with "[SHOULD NOT HAPPEN] failed to update managedFields" err="failed to convert new object (machine.openshift.io/v1beta1, Kind=MachineHealthCheck)
1999556 - "master" pool should be updated before the CVO reports available at the new version occurred
1999578 - AWS EFS CSI tests are constantly failing
1999603 - Memory Manager allows Guaranteed QoS Pod with hugepages requested is exactly equal to the left over Hugepages
1999619 - cloudinit is malformatted if a user sets a password during VM creation flow
1999621 - Empty ssh_authorized_keys entry is added to VM's cloudinit if created from a customize flow
1999649 - MetalLB: Only one type of IP address can be assigned to service on dual stack cluster from a address pool that have both IPv4 and IPv6 addresses defined
1999668 - openshift-install destroy cluster panic's when given invalid credentials to cloud provider (Azure Stack Hub)
1999734 - IBM Cloud CIS Instance CRN missing in infrastructure manifest/resource
1999771 - revert "force cert rotation every couple days for development" in 4.10
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
1999796 - Openshift Console `Helm` tab is not showing helm releases in a namespace when there is high number of deployments in the same namespace.
1999836 - Admin web-console inconsistent status summary of sparse ClusterOperator conditions
1999903 - Click "This is a CD-ROM boot source" ticking "Use template size PVC" on pvc upload form
1999983 - No way to clear upload error from template boot source
2000081 - [IPI baremetal] The metal3 pod failed to restart when switching from Disabled to Managed provisioning without specifying provisioningInterface parameter
2000096 - Git URL is not re-validated on edit build-config form reload
2000216 - Successfully imported ImageStreams are not resolved in DeploymentConfig
2000236 - Confusing usage message from dynkeepalived CLI
2000268 - Mark cluster unupgradable if vcenter, esxi versions or HW versions are unsupported
2000430 - bump cluster-api-provider-ovirt version in installer
2000450 - 4.10: Enable static PV multi-az test
2000490 - All critical alerts shipped by CMO should have links to a runbook
2000521 - Kube-apiserver CO degraded due to failed conditional check (ConfigObservationDegraded)
2000573 - Incorrect StorageCluster CR created and ODF cluster getting installed with 2 Zone OCP cluster
2000628 - ibm-flashsystem-storage-storagesystem got created without any warning even when the attempt was cancelled
2000651 - ImageStreamTag alias results in wrong tag and invalid link in Web Console
2000754 - IPerf2 tests should be lower
2000846 - Structure logs in the entire codebase of Local Storage Operator
2000872 - [tracker] container is not able to list on some directories within the nfs after upgrade to 4.7.24
2000877 - OCP ignores STOPSIGNAL in Dockerfile and sends SIGTERM
2000938 - CVO does not respect changes to a Deployment strategy
2000963 - 'Inline-volume (default fs)] volumes should store data' tests are failing on OKD with updated selinux-policy
2001008 - [MachineSets] CloneMode defaults to linkedClone, but I don't have snapshot and should be fullClone
2001240 - Remove response headers for downloads of binaries from OpenShift WebConsole
2001295 - Remove openshift:kubevirt-machine-controllers decleration from machine-api
2001317 - OCP Platform Quota Check - Inaccurate MissingQuota error
2001337 - Details Card in ODF Dashboard mentions OCS
2001339 - fix text content hotplug
2001413 - [e2e][automation] add/delete nic and disk to template
2001441 - Test: oc adm must-gather runs successfully for audit logs - fail due to startup log
2001442 - Empty termination.log file for the kube-apiserver has too permissive mode
2001479 - IBM Cloud DNS unable to create/update records
2001566 - Enable alerts for prometheus operator in UWM
2001575 - Clicking on the perspective switcher shows a white page with loader
2001577 - Quick search placeholder is not displayed properly when the search string is removed
2001578 - [e2e][automation] add tests for vm dashboard tab
2001605 - PVs remain in Released state for a long time after the claim is deleted
2001617 - BucketClass Creation is restricted on 1st page but enabled using side navigation options
2001620 - Cluster becomes degraded if it can't talk to Manila
2001760 - While creating 'Backing Store', 'Bucket Class', 'Namespace Store' user is navigated to 'Installed Operators' page after clicking on ODF
2001761 - Unable to apply cluster operator storage for SNO on GCP platform.
2001765 - Some error message in the log of diskmaker-manager caused confusion
2001784 - show loading page before final results instead of showing a transient message No log files exist
2001804 - Reload feature on Environment section in Build Config form does not work properly
2001810 - cluster admin unable to view BuildConfigs in all namespaces
2001817 - Failed to load RoleBindings list that will lead to 'Role name' is not able to be selected on Create RoleBinding page as well
2001823 - OCM controller must update operator status
2001825 - [SNO]ingress/authentication clusteroperator degraded when enable ccm from start
2001835 - Could not select image tag version when create app from dev console
2001855 - Add capacity is disabled for ocs-storagecluster
2001856 - Repeating event: MissingVersion no image found for operand pod
2001959 - Side nav list borders don't extend to edges of container
2002007 - Layout issue on "Something went wrong" page
2002010 - ovn-kube may never attempt to retry a pod creation
2002012 - Cannot change volume mode when cloning a VM from a template
2002027 - Two instances of Dotnet helm chart show as one in topology
2002075 - opm render does not automatically pulling in the image(s) used in the deployments
2002121 - [OVN] upgrades failed for IPI OSP16 OVN IPSec cluster
2002125 - Network policy details page heading should be updated to Network Policy details
2002133 - [e2e][automation] add support/virtualization and improve deleteResource
2002134 - [e2e][automation] add test to verify vm details tab
2002215 - Multipath day1 not working on s390x
2002238 - Image stream tag is not persisted when switching from yaml to form editor
2002262 - [vSphere] Incorrect user agent in vCenter sessions list
2002266 - SinkBinding create form doesn't allow to use subject name, instead of label selector
2002276 - OLM fails to upgrade operators immediately
2002300 - Altering the Schedule Profile configurations doesn't affect the placement of the pods
2002354 - Missing DU configuration "Done" status reporting during ZTP flow
2002362 - Dynamic Plugin - ConsoleRemotePlugin for webpack doesn't use commonjs
2002368 - samples should not go degraded when image allowedRegistries blocks imagestream creation
2002372 - Pod creation failed due to mismatched pod IP address in CNI and OVN
2002397 - Resources search is inconsistent
2002434 - CRI-O leaks some children PIDs
2002443 - Getting undefined error on create local volume set page
2002461 - DNS operator performs spurious updates in response to API's defaulting of service's internalTrafficPolicy
2002504 - When the openshift-cluster-storage-operator is degraded because of "VSphereProblemDetectorController_SyncError", the insights operator is not sending the logs from all pods.
2002559 - User preference for topology list view does not follow when a new namespace is created
2002567 - Upstream SR-IOV worker doc has broken links
2002588 - Change text to be sentence case to align with PF
2002657 - ovn-kube egress IP monitoring is using a random port over the node network
2002713 - CNO: OVN logs should have millisecond resolution
2002748 - [ICNI2] 'ErrorAddingLogicalPort' failed to handle external GW check: timeout waiting for namespace event
2002759 - Custom profile should not allow not including at least one required HTTP2 ciphersuite
2002763 - Two storage systems getting created with external mode RHCS
2002808 - KCM does not use web identity credentials
2002834 - Cluster-version operator does not remove unrecognized volume mounts
2002896 - Incorrect result return when user filter data by name on search page
2002950 - Why spec.containers.command is not created with "oc create deploymentconfig <dc-name> --image=<image> -- <command>"
2003096 - [e2e][automation] check bootsource URL is displaying on review step
2003113 - OpenShift Baremetal IPI installer uses first three defined nodes under hosts in install-config for master nodes instead of filtering the hosts with the master role
2003120 - CI: Uncaught error with ResizeObserver on operand details page
2003145 - Duplicate operand tab titles causes "two children with the same key" warning
2003164 - OLM, fatal error: concurrent map writes
2003178 - [FLAKE][knative] The UI doesn't show updated traffic distribution after accepting the form
2003193 - Kubelet/crio leaks netns and veth ports in the host
2003195 - OVN CNI should ensure host veths are removed
2003204 - Jenkins all new container images (openshift4/ose-jenkins) not supporting '-e JENKINS_PASSWORD=password' ENV which was working for old container images
2003206 - Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2003239 - "[sig-builds][Feature:Builds][Slow] can use private repositories as build input" tests fail outside of CI
2003244 - Revert libovsdb client code
2003251 - Patternfly components with list element has list item bullet when they should not.
2003252 - "[sig-builds][Feature:Builds][Slow] starting a build using CLI start-build test context override environment BUILD_LOGLEVEL in buildconfig" tests do not work as expected outside of CI
2003269 - Rejected pods should be filtered from admission regression
2003357 - QE- Removing the epic tags for gherkin tags related to 4.9 Release
2003426 - [e2e][automation] add test for vm details bootorder
2003496 - [e2e][automation] add test for vm resources requirment settings
2003641 - All metal ipi jobs are failing in 4.10
2003651 - ODF4.9+LSO4.8 installation via UI, StorageCluster move to error state
2003655 - [IPI ON-PREM] Keepalived chk_default_ingress track script failed even though default router pod runs on node
2003683 - Samples operator is panicking in CI
2003711 - [UI] Empty file ceph-external-cluster-details-exporter.py downloaded from external cluster "Connection Details" page
2003715 - Error on creating local volume set after selection of the volume mode
2003743 - Remove workaround keeping /boot RW for kdump support
2003775 - etcd pod on CrashLoopBackOff after master replacement procedure
2003788 - CSR reconciler report error constantly when BYOH CSR approved by other Approver
2003792 - Monitoring metrics query graph flyover panel is useless
2003808 - Add Sprint 207 translations
2003845 - Project admin cannot access image vulnerabilities view
2003859 - sdn emits events with garbage messages
2003896 - (release-4.10) ApiRequestCounts conditional gatherer
2004009 - 4.10: Fix multi-az zone scheduling e2e for 5 control plane replicas
2004051 - CMO can report as being Degraded while node-exporter is deployed on all nodes
2004059 - [e2e][automation] fix current tests for downstream
2004060 - Trying to use basic spring boot sample causes crash on Firefox
2004101 - [UI] When creating storageSystem deployment type dropdown under advanced setting doesn't close after selection
2004127 - [flake] openshift-controller-manager event reason/SuccessfulDelete occurs too frequently
2004203 - build config's created prior to 4.8 with image change triggers can result in trigger storm in OCM/openshift-apiserver
2004313 - [RHOCP 4.9.0-rc.0] Failing to deploy Azure cluster from the macOS installer - ignition_bootstrap.ign: no such file or directory
2004449 - Boot option recovery menu prevents image boot
2004451 - The backup filename displayed in the RecentBackup message is incorrect
2004459 - QE - Modified the AddFlow gherkin scripts and automation scripts
2004508 - TuneD issues with the recent ConfigParser changes.
2004510 - openshift-gitops operator hooks gets unauthorized (401) errors during jobs executions
2004542 - [osp][octavia lb] cannot create LoadBalancer type svcs
2004578 - Monitoring and node labels missing for an external storage platform
2004585 - prometheus-k8s-0 cpu usage keeps increasing for the first 3 days
2004596 - [4.10] Bootimage bump tracker
2004597 - Duplicate ramdisk log containers running
2004600 - Duplicate ramdisk log containers running
2004609 - output of "crictl inspectp" is not complete
2004625 - BMC credentials could be logged if they change
2004632 - When LE takes a large amount of time, multiple whereabouts are seen
2004721 - ptp/worker custom threshold doesn't change ptp events threshold
2004736 - [knative] Create button on new Broker form is inactive despite form being filled
2004796 - [e2e][automation] add test for vm scheduling policy
2004814 - (release-4.10) OCM controller - change type of the etc-pki-entitlement secret to opaque
2004870 - [External Mode] Insufficient spacing along y-axis in RGW Latency Performance Card
2004901 - [e2e][automation] improve kubevirt devconsole tests
2004962 - Console frontend job consuming too much CPU in CI
2005014 - state of ODF StorageSystem is misreported during installation or uninstallation
2005052 - Adding a MachineSet selector matchLabel causes orphaned Machines
2005179 - pods status filter is not taking effect
2005182 - sync list of deprecated apis about to be removed
2005282 - Storage cluster name is given as title in StorageSystem details page
2005355 - setuptools 58 makes Kuryr CI fail
2005407 - ClusterNotUpgradeable Alert should be set to Severity Info
2005415 - PTP operator with sidecar api configured throws bind: address already in use
2005507 - SNO spoke cluster failing to reach coreos.live.rootfs_url is missing url in console
2005554 - The switch status of the button "Show default project" is not revealed correctly in code
2005581 - 4.8.12 to 4.9 upgrade hung due to cluster-version-operator pod CrashLoopBackOff: error creating clients: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
2005761 - QE - Implementing crw-basic feature file
2005783 - Fix accessibility issues in the "Internal" and "Internal - Attached Mode" Installation Flow
2005811 - vSphere Problem Detector operator - ServerFaultCode: InvalidProperty
2005854 - SSH NodePort service is created for each VM
2005901 - KS, KCM and KA going Degraded during master nodes upgrade
2005902 - Current UI flow for MCG only deployment is confusing and doesn't reciprocate any message to the end-user
2005926 - PTP operator NodeOutOfPTPSync rule is using max offset from the master instead of openshift_ptp_clock_state metrics
2005971 - Change telemeter to report the Application Services product usage metrics
2005997 - SELinux domain container_logreader_t does not have a policy to follow sym links for log files
2006025 - Description to use an existing StorageClass while creating StorageSystem needs to be re-phrased
2006060 - ocs-storagecluster-storagesystem details are missing on UI for MCG Only and MCG only in LSO mode deployment types
2006101 - Power off fails for drivers that don't support Soft power off
2006243 - Metal IPI upgrade jobs are running out of disk space
2006291 - bootstrapProvisioningIP set incorrectly when provisioningNetworkCIDR doesn't use the 0th address
2006308 - Backing Store YAML tab on click displays a blank screen on UI
2006325 - Multicast is broken across nodes
2006329 - Console only allows Web Terminal Operator to be installed in OpenShift Operators
2006364 - IBM Cloud: Set resourceGroupId for resourceGroups, not simply resource
2006561 - [sig-instrumentation] Prometheus when installed on the cluster shouldn't have failing rules evaluation [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2006690 - OS boot failure "x64 Exception Type 06 - Invalid Opcode Exception"
2006714 - add retry for etcd errors in kube-apiserver
2006767 - KubePodCrashLooping may not fire
2006803 - Set CoreDNS cache entries for forwarded zones
2006861 - Add Sprint 207 part 2 translations
2006945 - race condition can cause crashlooping bootstrap kube-apiserver in cluster-bootstrap
2006947 - e2e-aws-proxy for 4.10 is permafailing with samples operator errors
2006975 - clusteroperator/etcd status condition should not change reasons frequently due to EtcdEndpointsDegraded
2007085 - Intermittent failure mounting /run/media/iso when booting live ISO from USB stick
2007136 - Creation of BackingStore, BucketClass, NamespaceStore fails
2007271 - CI Integration for Knative test cases
2007289 - kubevirt tests are failing in CI
2007322 - Devfile/Dockerfile import does not work for unsupported git host
2007328 - Updated patternfly to v4.125.3 and pf.quickstarts to v1.2.3.
2007379 - Events are not generated for master offset for ordinary clock
2007443 - [ICNI 2.0] Loadbalancer pods do not establish BFD sessions with all workers that host pods for the routed namespace
2007455 - cluster-etcd-operator: render command should fail if machineCidr contains reserved address
2007495 - Large label value for the metric kubelet_started_pods_errors_total with label message when there is a error
2007522 - No new local-storage-operator-metadata-container is build for 4.10
2007551 - No new ose-aws-efs-csi-driver-operator-bundle-container is build for 4.10
2007580 - Azure cilium installs are failing e2e tests
2007581 - Too many haproxy processes in default-router pod causing high load average after upgrade from v4.8.3 to v4.8.10
2007677 - Regression: core container io performance metrics are missing for pod, qos, and system slices on nodes
2007692 - 4.9 "old-rhcos" jobs are permafailing with storage test failures
2007710 - ci/prow/e2e-agnostic-cmd job is failing on prow
2007757 - must-gather extracts imagestreams in the "openshift" namespace, but not Templates
2007802 - AWS machine actuator get stuck if machine is completely missing
2008096 - TestAWSFinalizerDeleteS3Bucket sometimes fails to teardown operator
2008119 - The serviceAccountIssuer field on Authentication CR is reseted to "" when installation process
2008151 - Topology breaks on clicking in empty state
2008185 - Console operator go.mod should use go 1.16.version
2008201 - openstack-az job is failing on haproxy idle test
2008207 - vsphere CSI driver doesn't set resource limits
2008223 - gather_audit_logs: fix oc command line to get the current audit profile
2008235 - The Save button in the Edit DC form remains disabled
2008256 - Update Internationalization README with scope info
2008321 - Add correct documentation link for MON_DISK_LOW
2008462 - Disable PodSecurity feature gate for 4.10
2008490 - Backing store details page does not contain all the kebab actions.
2008521 - gcp-hostname service should correct invalid search entries in resolv.conf
2008532 - CreateContainerConfigError:: failed to prepare subPath for volumeMount
2008539 - Registry doesn't fall back to secondary ImageContentSourcePolicy Mirror
2008540 - HighlyAvailableWorkloadIncorrectlySpread always fires on upgrade on cluster with two workers
2008599 - Azure Stack UPI does not have Internal Load Balancer
2008612 - Plugin asset proxy does not pass through browser cache headers
2008712 - VPA webhook timeout prevents all pods from starting
2008733 - kube-scheduler: exposed /debug/pprof port
2008911 - Prometheus repeatedly scaling prometheus-operator replica set
2008926 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2008987 - OpenShift SDN Hosted Egress IP's are not being scheduled to nodes after upgrade to 4.8.12
2009055 - Instances of OCS to be replaced with ODF on UI
2009078 - NetworkPodsCrashLooping alerts in upgrade CI jobs
2009083 - opm blocks pruning of existing bundles during add
2009111 - [IPI-on-GCP] 'Install a cluster with nested virtualization enabled' failed due to unable to launch compute instances
2009131 - [e2e][automation] add more test about vmi
2009148 - [e2e][automation] test vm nic presets and options
2009233 - ACM policy object generated by PolicyGen conflicting with OLM Operator
2009253 - [BM] [IPI] [DualStack] apiVIP and ingressVIP should be of the same primary IP family
2009298 - Service created for VM SSH access is not owned by the VM and thus is not deleted if the VM is deleted
2009384 - UI changes to support BindableKinds CRD changes
2009404 - ovnkube-node pod enters CrashLoopBackOff after OVN_IMAGE is swapped
2009424 - Deployment upgrade is failing availability check
2009454 - Change web terminal subscription permissions from get to list
2009465 - container-selinux should come from rhel8-appstream
2009514 - Bump OVS to 2.16-15
2009555 - Supermicro X11 system not booting from vMedia with AI
2009623 - Console: Observe > Metrics page: Table pagination menu shows bullet points
2009664 - Git Import: Edit of knative service doesn't work as expected for git import flow
2009699 - Failure to validate flavor RAM
2009754 - Footer is not sticky anymore in import forms
2009785 - CRI-O's version file should be pinned by MCO
2009791 - Installer: ibmcloud ignores install-config values
2009823 - [sig-arch] events should not repeat pathologically - reason/VSphereOlderVersionDetected Marking cluster un-upgradeable because one or more VMs are on hardware version vmx-13
2009840 - cannot build extensions on aarch64 because of unavailability of rhel-8-advanced-virt repo
2009859 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests
2009873 - Stale Logical Router Policies and Annotations for a given node
2009879 - There should be test-suite coverage to ensure admin-acks work as expected
2009888 - SRO package name collision between official and community version
2010073 - uninstalling and then reinstalling sriov-network-operator is not working
2010174 - 2 PVs get created unexpectedly with different paths that actually refer to the same device on the node.
2010181 - Environment variables not getting reset on reload on deployment edit form
2010310 - [sig-instrumentation][Late] OpenShift alerting rules should have description and summary annotations [Skipped:Disconnected] [Suite:openshift/conformance/parallel]
2010341 - OpenShift Alerting Rules Style-Guide Compliance
2010342 - Local console builds can have out of memory errors
2010345 - OpenShift Alerting Rules Style-Guide Compliance
2010348 - Reverts PIE build mode for K8S components
2010352 - OpenShift Alerting Rules Style-Guide Compliance
2010354 - OpenShift Alerting Rules Style-Guide Compliance
2010359 - OpenShift Alerting Rules Style-Guide Compliance
2010368 - OpenShift Alerting Rules Style-Guide Compliance
2010376 - OpenShift Alerting Rules Style-Guide Compliance
2010662 - Cluster is unhealthy after image-registry-operator tests
2010663 - OpenShift Alerting Rules Style-Guide Compliance (ovn-kubernetes subcomponent)
2010665 - Bootkube tries to use oc after cluster bootstrap is done and there is no API
2010698 - [BM] [IPI] [Dual Stack] Installer must ensure ipv6 short forms too if clusterprovisioning IP is specified as ipv6 address
2010719 - etcdHighNumberOfFailedGRPCRequests runbook is missing
2010864 - Failure building EFS operator
2010910 - ptp worker events unable to identify interface for multiple interfaces
2010911 - RenderOperatingSystem() returns wrong OS version on OCP 4.7.24
2010921 - Azure Stack Hub does not handle additionalTrustBundle
2010931 - SRO CSV uses non default category "Drivers and plugins"
2010946 - concurrent CRD from ovirt-csi-driver-operator gets reconciled by CVO after deployment, changing CR as well.
2011038 - optional operator conditions are confusing
2011063 - CVE-2021-39226 grafana: Snapshot authentication bypass
2011171 - diskmaker-manager constantly redeployed by LSO when creating LV's
2011293 - Build pod are not pulling images if we are not explicitly giving the registry name with the image
2011368 - Tooltip in pipeline visualization shows misleading data
2011386 - [sig-arch] Check if alerts are firing during or after upgrade success --- alert KubePodNotReady fired for 60 seconds with labels
2011411 - Managed Service's Cluster overview page contains link to missing Storage dashboards
2011443 - Cypress tests assuming Admin Perspective could fail on shared/reference cluster
2011513 - Kubelet rejects pods that use resources that should be freed by completed pods
2011668 - Machine stuck in deleting phase in VMware "reconciler failed to Delete machine"
2011693 - (release-4.10) "insightsclient_request_recvreport_total" metric is always incremented
2011698 - After upgrading cluster to 4.8 the kube-state-metrics service doesn't export namespace labels anymore
2011733 - Repository README points to broken documentarion link
2011753 - Ironic resumes clean before raid configuration job is actually completed
2011809 - The nodes page in the openshift console doesn't work. You just get a blank page
2011822 - Obfuscation doesn't work at clusters with OVN
2011882 - SRO helm charts not synced with templates
2011893 - Validation: BMC driver ipmi is not supported for secure UEFI boot
2011896 - [4.10] ClusterVersion Upgradeable=False MultipleReasons should include all messages
2011903 - vsphere-problem-detector: session leak
2011927 - OLM should allow users to specify a proxy for GRPC connections
2011956 - [tracker] Kubelet rejects pods that use resources that should be freed by completed pods
2011960 - [tracker] Storage operator is not available after reboot cluster instances
2011971 - ICNI2 pods are stuck in ContainerCreating state
2011972 - Ingress operator not creating wildcard route for hypershift clusters
2011977 - SRO bundle references non-existent image
2012069 - Refactoring Status controller
2012177 - [OCP 4.9 + OCS 4.8.3] Overview tab is missing under Storage after successful deployment on UI
2012228 - ibmcloud: credentialsrequests invalid for machine-api-operator: resource-group
2012233 - [IBMCLOUD] IPI: "Exceeded limit of remote rules per security group (the limit is 5 remote rules per security group)"
2012235 - [IBMCLOUD] IPI: IBM cloud provider requires ResourceGroupName in cloudproviderconfig
2012317 - Dynamic Plugins: ListPageCreateDropdown items cut off
2012407 - [e2e][automation] improve vm tab console tests
2012426 - ThanosSidecarBucketOperationsFailed/ThanosSidecarUnhealthy alerts don't have namespace label
2012562 - migration condition is not detected in list view
2012770 - when using expression metric openshift_apps_deploymentconfigs_last_failed_rollout_time namespace label is re-written
2012780 - The port 50936 used by haproxy is occupied by kube-apiserver
2012838 - Setting the default maximum container root partition size for Overlay with CRI-O stop working
2012902 - Neutron Ports assigned to Completed Pods are not reused Edit
2012915 - kube_persistentvolumeclaim_labels and kube_persistentvolume_labels are missing in OCP 4.8 monitoring stack
2012971 - Disable operands deletes
2013034 - Cannot install to openshift-nmstate namespace
2013127 - OperatorHub links could not be opened in a new tabs (sharing and open a deep link works fine)
2013199 - post reboot of node SRIOV policy taking huge time
2013203 - UI breaks when trying to create block pool before storage cluster/system creation
2013222 - Full breakage for nightly payload promotion
2013273 - Nil pointer exception when phc2sys options are missing
2013321 - TuneD: high CPU utilization of the TuneD daemon.
2013416 - Multiple assets emit different content to the same filename
2013431 - Application selector dropdown has incorrect font-size and positioning
2013528 - mapi_current_pending_csr is always set to 1 on OpenShift Container Platform 4.8
2013545 - Service binding created outside topology is not visible
2013599 - Scorecard support storage is not included in ocp4.9
2013632 - Correction/Changes in Quick Start Guides for ODF 4.9 (Install ODF guide)
2013646 - fsync controller will show false positive if gaps in metrics are observed.
2013710 - ZTP Operator subscriptions for 4.9 release branch should point to 4.9 by default
2013751 - Service details page is showing wrong in-cluster hostname
2013787 - There are two tittle 'Network Attachment Definition Details' on NAD details page
2013871 - Resource table headings are not aligned with their column data
2013895 - Cannot enable accelerated network via MachineSets on Azure
2013920 - "--collector.filesystem.ignored-mount-points is DEPRECATED and will be removed in 2.0.0, use --collector.filesystem.mount-points-exclude"
2013930 - Create Buttons enabled for Bucket Class, Backingstore and Namespace Store in the absence of Storagesystem(or MCG)
2013969 - oVIrt CSI driver fails on creating PVCs on hosted engine storage domain
2013990 - Observe dashboard crashs on reload when perspective has changed (in another tab)
2013996 - Project detail page: Action "Delete Project" does nothing for the default project
2014071 - Payload imagestream new tags not properly updated during cluster upgrade
2014153 - SRIOV exclusive pooling
2014202 - [OCP-4.8.10] OVN-Kubernetes: service IP is not responding when egressIP set to the namespace
2014238 - AWS console test is failing on importing duplicate YAML definitions
2014245 - Several aria-labels, external links, and labels aren't internationalized
2014248 - Several files aren't internationalized
2014352 - Could not filter out machine by using node name on machines page
2014464 - Unexpected spacing/padding below navigation groups in developer perspective
2014471 - Helm Release notes tab is not automatically open after installing a chart for other languages
2014486 - Integration Tests: OLM single namespace operator tests failing
2014488 - Custom operator cannot change orders of condition tables
2014497 - Regex slows down different forms and creates too much recursion errors in the log
2014538 - Kuryr controller crash looping on self._get_vip_port(loadbalancer).id 'NoneType' object has no attribute 'id'
2014614 - Metrics scraping requests should be assigned to exempt priority level
2014710 - TestIngressStatus test is broken on Azure
2014954 - The prometheus-k8s-{0,1} pods are CrashLoopBackoff repeatedly
2014995 - oc adm must-gather cannot gather audit logs with 'None' audit profile
2015115 - [RFE] PCI passthrough
2015133 - [IBMCLOUD] ServiceID API key credentials seems to be insufficient for ccoctl '--resource-group-name' parameter
2015154 - Support ports defined networks and primarySubnet
2015274 - Yarn dev fails after updates to dynamic plugin JSON schema logic
2015337 - 4.9.0 GA MetalLB operator image references need to be adjusted to match production
2015386 - Possibility to add labels to the built-in OCP alerts
2015395 - Table head on Affinity Rules modal is not fully expanded
2015416 - CI implementation for Topology plugin
2015418 - Project Filesystem query returns No datapoints found
2015420 - No vm resource in project view's inventory
2015422 - No conflict checking on snapshot name
2015472 - Form and YAML view switch button should have distinguishable status
2015481 - [4.10] sriov-network-operator daemon pods are failing to start
2015493 - Cloud Controller Manager Operator does not respect 'additionalTrustBundle' setting
2015496 - Storage - PersistentVolumes : Claim colum value 'No Claim' in English
2015498 - [UI] Add capacity when not applicable (for MCG only deployment and External mode cluster) fails to pass any info. to user and tries to just load a blank screen on 'Add Capacity' button click
2015506 - Home - Search - Resources - APIRequestCount : hard to select an item from ellipsis menu
2015515 - Kubelet checks all providers even if one is configured: NoCredentialProviders: no valid providers in chain.
\n2015535 - Administration - ResourceQuotas - ResourceQuota details: Inside Pie chart \u0027x% used\u0027 is in English\n2015549 - Observe - Metrics: Column heading and pagination text is in English\n2015557 - Workloads - DeploymentConfigs : Error message is in English\n2015568 - Compute - Nodes : CPU column\u0027s values are in English\n2015635 - Storage operator fails causing installation to fail on ASH\n2015660 - \"Finishing boot source customization\" screen should not use term \"patched\"\n2015793 - [hypershift] The collect-profiles job\u0027s pods should run on the control-plane node\n2015806 - Metrics view in Deployment reports \"Forbidden\" when not cluster-admin\n2015819 - Conmon sandbox processes run on non-reserved CPUs with workload partitioning\n2015837 - OS_CLOUD overwrites install-config\u0027s platform.openstack.cloud\n2015950 - update from 4.7.22 to 4.8.11 is failing due to large amount of secrets to watch\n2015952 - RH CodeReady Workspaces Operator in e2e testing will soon fail\n2016004 - [RFE] RHCOS: help determining whether a user-provided image was already booted (Ignition provisioning already performed)\n2016008 - [4.10] Bootimage bump tracker\n2016052 - No e2e CI presubmit configured for release component azure-file-csi-driver\n2016053 - No e2e CI presubmit configured for release component azure-file-csi-driver-operator\n2016054 - No e2e CI presubmit configured for release component cluster-autoscaler\n2016055 - No e2e CI presubmit configured for release component console\n2016058 - openshift-sync does not synchronise in \"ose-jenkins:v4.8\"\n2016064 - No e2e CI presubmit configured for release component ibm-cloud-controller-manager\n2016065 - No e2e CI presubmit configured for release component ibmcloud-machine-controllers\n2016175 - Pods get stuck in ContainerCreating state when attaching volumes fails on SNO clusters. 
\n2016179 - Add Sprint 208 translations\n2016228 - Collect Profiles pprof secret is hardcoded to openshift-operator-lifecycle-manager\n2016235 - should update to 7.5.11 for grafana resources version label\n2016296 - Openshift virtualization : Create Windows Server 2019 VM using template : Fails\n2016334 - shiftstack: SRIOV nic reported as not supported\n2016352 - Some pods start before CA resources are present\n2016367 - Empty task box is getting created for a pipeline without finally task\n2016435 - Duplicate AlertmanagerClusterFailedToSendAlerts alerts\n2016438 - Feature flag gating is missing in few extensions contributed via knative plugin\n2016442 - OCPonRHV: pvc should be in Bound state and without error when choosing default sc\n2016446 - [OVN-Kubernetes] Egress Networkpolicy is failing Intermittently for statefulsets\n2016453 - Complete i18n for GaugeChart defaults\n2016479 - iface-id-ver is not getting updated for existing lsp\n2016925 - Dashboards with All filter, change to a specific value and change back to All, data will disappear\n2016951 - dynamic actions list is not disabling \"open console\" for stopped vms\n2016955 - m5.large instance type for bootstrap node is hardcoded causing deployments to fail if instance type is not available\n2016988 - NTO does not set io_timeout and max_retries for AWS Nitro instances\n2017016 - [REF] Virtualization menu\n2017036 - [sig-network-edge][Feature:Idling] Unidling should handle many TCP connections fails in periodic-ci-openshift-release-master-ci-4.9-e2e-openstack-ovn\n2017050 - Dynamic Plugins: Shared modules loaded multiple times, breaking use of PatternFly\n2017130 - t is not a function error navigating to details page\n2017141 - Project dropdown has a dynamic inline width added which can cause min-width issue\n2017244 - ovirt csi operator static files creation is in the wrong order\n2017276 - [4.10] Volume mounts not created with the correct security context\n2017327 - When run opm index prune failed with 
error removing operator package cic-operator FOREIGN KEY constraint failed. \n2017427 - NTO does not restart TuneD daemon when profile application is taking too long\n2017535 - Broken Argo CD link image on GitOps Details Page\n2017547 - Siteconfig application sync fails with The AgentClusterInstall is invalid: spec.provisionRequirements.controlPlaneAgents: Required value when updating images references\n2017564 - On-prem prepender dispatcher script overwrites DNS search settings\n2017565 - CCMO does not handle additionalTrustBundle on Azure Stack\n2017566 - MetalLB: Web Console -Create Address pool form shows address pool name twice\n2017606 - [e2e][automation] add test to verify send key for VNC console\n2017650 - [OVN]EgressFirewall cannot be applied correctly if cluster has windows nodes\n2017656 - VM IP address is \"undefined\" under VM details -\u003e ssh field\n2017663 - SSH password authentication is disabled when public key is not supplied\n2017680 - [gcp] Couldn\u2019t enable support for instances with GPUs on GCP\n2017732 - [KMS] Prevent creation of encryption enabled storageclass without KMS connection set\n2017752 - (release-4.10) obfuscate identity provider attributes in collected authentication.operator.openshift.io resource\n2017756 - overlaySize setting on containerruntimeconfig is ignored due to cri-o defaults\n2017761 - [e2e][automation] dummy bug for 4.9 test dependency\n2017872 - Add Sprint 209 translations\n2017874 - The installer is incorrectly checking the quota for X instances instead of G and VT instances\n2017879 - Add Chinese translation for \"alternate\"\n2017882 - multus: add handling of pod UIDs passed from runtime\n2017909 - [ICNI 2.0] ovnkube-masters stop processing add/del events for pods\n2018042 - HorizontalPodAutoscaler CPU averageValue did not show up in HPA metrics GUI\n2018093 - Managed cluster should ensure control plane pods do not run in best-effort QoS\n2018094 - the tooltip length is limited\n2018152 - CNI pod is not 
restarted when It cannot start servers due to ports being used\n2018208 - e2e-metal-ipi-ovn-ipv6 are failing 75% of the time\n2018234 - user settings are saved in local storage instead of on cluster\n2018264 - Delete Export button doesn\u0027t work in topology sidebar (general issue with unknown CSV?)\n2018272 - Deployment managed by link and topology sidebar links to invalid resource page (at least for Exports)\n2018275 - Topology graph doesn\u0027t show context menu for Export CSV\n2018279 - Edit and Delete confirmation modals for managed resource should close when the managed resource is clicked\n2018380 - Migrate docs links to access.redhat.com\n2018413 - Error: context deadline exceeded, OCP 4.8.9\n2018428 - PVC is deleted along with VM even with \"Delete Disks\" unchecked\n2018445 - [e2e][automation] enhance tests for downstream\n2018446 - [e2e][automation] move tests to different level\n2018449 - [e2e][automation] add test about create/delete network attachment definition\n2018490 - [4.10] Image provisioning fails with file name too long\n2018495 - Fix typo in internationalization README\n2018542 - Kernel upgrade does not reconcile DaemonSet\n2018880 - Get \u0027No datapoints found.\u0027 when query metrics about alert rule KubeCPUQuotaOvercommit and KubeMemoryQuotaOvercommit\n2018884 - QE - Adapt crw-basic feature file to OCP 4.9/4.10 changes\n2018935 - go.sum not updated, that ART extracts version string from, WAS: Missing backport from 4.9 for Kube bump PR#950\n2018965 - e2e-metal-ipi-upgrade is permafailing in 4.10\n2018985 - The rootdisk size is 15Gi of windows VM in customize wizard\n2019001 - AWS: Operator degraded (CredentialsFailing): 1 of 6 credentials requests are failing to sync. 
\n2019096 - Update SRO leader election timeout to support SNO\n2019129 - SRO in operator hub points to wrong repo for README\n2019181 - Performance profile does not apply\n2019198 - ptp offset metrics are not named according to the log output\n2019219 - [IBMCLOUD]: cloud-provider-ibm missing IAM permissions in CCCMO CredentialRequest\n2019284 - Stop action should not in the action list while VMI is not running\n2019346 - zombie processes accumulation and Argument list too long\n2019360 - [RFE] Virtualization Overview page\n2019452 - Logger object in LSO appends to existing logger recursively\n2019591 - Operator install modal body that scrolls has incorrect padding causing shadow position to be incorrect\n2019634 - Pause and migration is enabled in action list for a user who has view only permission\n2019636 - Actions in VM tabs should be disabled when user has view only permission\n2019639 - \"Take snapshot\" should be disabled while VM image is still been importing\n2019645 - Create button is not removed on \"Virtual Machines\" page for view only user\n2019646 - Permission error should pop-up immediately while clicking \"Create VM\" button on template page for view only user\n2019647 - \"Remove favorite\" and \"Create new Template\" should be disabled in template action list for view only user\n2019717 - cant delete VM with un-owned pvc attached\n2019722 - The shared-resource-csi-driver-node pod runs as \u201cBestEffort\u201d qosClass\n2019739 - The shared-resource-csi-driver-node uses imagePullPolicy as \"Always\"\n2019744 - [RFE] Suggest users to download newest RHEL 8 version\n2019809 - [OVN][Upgrade] After upgrade to 4.7.34 ovnkube-master pods are in CrashLoopBackOff/ContainerCreating and other multiple issues at OVS/OVN level\n2019827 - Display issue with top-level menu items running demo plugin\n2019832 - 4.10 Nightlies blocked: Failed to upgrade authentication, operator was degraded\n2019886 - Kuryr unable to finish ports recovery upon controller 
restart\n2019948 - [RFE] Restructring Virtualization links\n2019972 - The Nodes section doesn\u0027t display the csr of the nodes that are trying to join the cluster\n2019977 - Installer doesn\u0027t validate region causing binary to hang with a 60 minute timeout\n2019986 - Dynamic demo plugin fails to build\n2019992 - instance:node_memory_utilisation:ratio metric is incorrect\n2020001 - Update dockerfile for demo dynamic plugin to reflect dir change\n2020003 - MCD does not regard \"dangling\" symlinks as a files, attempts to write through them on next backup, resulting in \"not writing through dangling symlink\" error and degradation. \n2020107 - cluster-version-operator: remove runlevel from CVO namespace\n2020153 - Creation of Windows high performance VM fails\n2020216 - installer: Azure storage container blob where is stored bootstrap.ign file shouldn\u0027t be public\n2020250 - Replacing deprecated ioutil\n2020257 - Dynamic plugin with multiple webpack compilation passes may fail to build\n2020275 - ClusterOperators link in console returns blank page during upgrades\n2020377 - permissions error while using tcpdump option with must-gather\n2020489 - coredns_dns metrics don\u0027t include the custom zone metrics data due to CoreDNS prometheus plugin is not defined\n2020498 - \"Show PromQL\" button is disabled\n2020625 - [AUTH-52] User fails to login from web console with keycloak OpenID IDP after enable group membership sync feature\n2020638 - [4.7] CI conformance test failures related to CustomResourcePublishOpenAPI\n2020664 - DOWN subports are not cleaned up\n2020904 - When trying to create a connection from the Developer view between VMs, it fails\n2021016 - \u0027Prometheus Stats\u0027 of dashboard \u0027Prometheus Overview\u0027 miss data on console compared with Grafana\n2021017 - 404 page not found error on knative eventing page\n2021031 - QE - Fix the topology CI scripts\n2021048 - [RFE] Added MAC Spoof check\n2021053 - Metallb operator presented as 
community operator\n2021067 - Extensive number of requests from storage version operator in cluster\n2021081 - Missing PolicyGenTemplate for configuring Local Storage Operator LocalVolumes\n2021135 - [azure-file-csi-driver] \"make unit-test\" returns non-zero code, but tests pass\n2021141 - Cluster should allow a fast rollout of kube-apiserver is failing on single node\n2021151 - Sometimes the DU node does not get the performance profile configuration applied and MachineConfigPool stays stuck in Updating\n2021152 - imagePullPolicy is \"Always\" for ptp operator images\n2021191 - Project admins should be able to list available network attachment defintions\n2021205 - Invalid URL in git import form causes validation to not happen on URL change\n2021322 - cluster-api-provider-azure should populate purchase plan information\n2021337 - Dynamic Plugins: ResourceLink doesn\u0027t render when passed a groupVersionKind\n2021364 - Installer requires invalid AWS permission s3:GetBucketReplication\n2021400 - Bump documentationBaseURL to 4.10\n2021405 - [e2e][automation] VM creation wizard Cloud Init editor\n2021433 - \"[sig-builds][Feature:Builds][pullsearch] docker build where the registry is not specified\" test fail permanently on disconnected\n2021466 - [e2e][automation] Windows guest tool mount\n2021544 - OCP 4.6.44 - Ingress VIP assigned as secondary IP in ovs-if-br-ex and added to resolv.conf as nameserver\n2021551 - Build is not recognizing the USER group from an s2i image\n2021607 - Unable to run openshift-install with a vcenter hostname that begins with a numeric character\n2021629 - api request counts for current hour are incorrect\n2021632 - [UI] Clicking on odf-operator breadcrumb from StorageCluster details page displays empty page\n2021693 - Modals assigned modal-lg class are no longer the correct width\n2021724 - Observe \u003e Dashboards: Graph lines are not visible when obscured by other lines\n2021731 - CCO occasionally down, reporting 
networksecurity.googleapis.com API as disabled\n2021936 - Kubelet version in RPMs should be using Dockerfile label instead of git tags\n2022050 - [BM][IPI] Failed during bootstrap - unable to read client-key /var/lib/kubelet/pki/kubelet-client-current.pem\n2022053 - dpdk application with vhost-net is not able to start\n2022114 - Console logging every proxy request\n2022144 - 1 of 3 ovnkube-master pods stuck in clbo after ipi bm deployment - dualstack (Intermittent)\n2022251 - wait interval in case of a failed upload due to 403 is unnecessarily long\n2022399 - MON_DISK_LOW troubleshooting guide link when clicked, gives 404 error . \n2022447 - ServiceAccount in manifests conflicts with OLM\n2022502 - Patternfly tables with a checkbox column are not displaying correctly because of conflicting css rules. \n2022509 - getOverrideForManifest does not check manifest.GVK.Group\n2022536 - WebScale: duplicate ecmp next hop error caused by multiple of the same gateway IPs in ovnkube cache\n2022612 - no namespace field for \"Kubernetes / Compute Resources / Namespace (Pods)\" admin console dashboard\n2022627 - Machine object not picking up external FIP added to an openstack vm\n2022646 - configure-ovs.sh failure - Error: unknown connection \u0027WARN:\u0027\n2022707 - Observe / monitoring dashboard shows forbidden errors on Dev Sandbox\n2022801 - Add Sprint 210 translations\n2022811 - Fix kubelet log rotation file handle leak\n2022812 - [SCALE] ovn-kube service controller executes unnecessary load balancer operations\n2022824 - Large number of sessions created by vmware-vsphere-csi-driver-operator during e2e tests\n2022880 - Pipeline renders with minor visual artifact with certain task dependencies\n2022886 - Incorrect URL in operator description\n2023042 - CRI-O filters custom runtime allowed annotation when both custom workload and custom runtime sections specified under the config\n2023060 - [e2e][automation] Windows VM with CDROM migration\n2023077 - [e2e][automation] Home 
Overview Virtualization status\n2023090 - [e2e][automation] Examples of Import URL for VM templates\n2023102 - [e2e][automation] Cloudinit disk of VM from custom template\n2023216 - ACL for a deleted egressfirewall still present on node join switch\n2023228 - Remove Tech preview badge on Trigger components 1.6 OSP on OCP 4.9\n2023238 - [sig-devex][Feature:ImageEcosystem][python][Slow] hot deploy for openshift python image Django example should work with hot deploy\n2023342 - SCC admission should take ephemeralContainers into account\n2023356 - Devfiles can\u0027t be loaded in Safari on macOS (403 - Forbidden)\n2023434 - Update Azure Machine Spec API to accept Marketplace Images\n2023500 - Latency experienced while waiting for volumes to attach to node\n2023522 - can\u0027t remove package from index: database is locked\n2023560 - \"Network Attachment Definitions\" has no project field on the top in the list view\n2023592 - [e2e][automation] add mac spoof check for nad\n2023604 - ACL violation when deleting a provisioning-configuration resource\n2023607 - console returns blank page when normal user without any projects visit Installed Operators page\n2023638 - Downgrade support level for extended control plane integration to Dev Preview\n2023657 - inconsistent behaviours of adding ssh key on rhel node between 4.9 and 4.10\n2023675 - Changing CNV Namespace\n2023779 - Fix Patch 104847 in 4.9\n2023781 - initial hardware devices is not loading in wizard\n2023832 - CCO updates lastTransitionTime for non-Status changes\n2023839 - Bump recommended FCOS to 34.20211031.3.0\n2023865 - Console css overrides prevent dynamic plug-in PatternFly tables from displaying correctly\n2023950 - make test-e2e-operator on kubernetes-nmstate results in failure to pull image from \"registry:5000\" repository\n2023985 - [4.10] OVN idle service cannot be accessed after upgrade from 4.8\n2024055 - External DNS added extra prefix for the TXT record\n2024108 - Occasionally node remains in 
SchedulingDisabled state even after update has been completed sucessfully\n2024190 - e2e-metal UPI is permafailing with inability to find rhcos.json\n2024199 - 400 Bad Request error for some queries for the non admin user\n2024220 - Cluster monitoring checkbox flickers when installing Operator in all-namespace mode\n2024262 - Sample catalog is not displayed when one API call to the backend fails\n2024309 - cluster-etcd-operator: defrag controller needs to provide proper observability\n2024316 - modal about support displays wrong annotation\n2024328 - [oVirt / RHV] PV disks are lost when machine deleted while node is disconnected\n2024399 - Extra space is in the translated text of \"Add/Remove alternate service\" on Create Route page\n2024448 - When ssh_authorized_keys is empty in form view it should not appear in yaml view\n2024493 - Observe \u003e Alerting \u003e Alerting rules page throws error trying to destructure undefined\n2024515 - test-blocker: Ceph-storage-plugin tests failing\n2024535 - hotplug disk missing OwnerReference\n2024537 - WINDOWS_IMAGE_LINK does not refer to windows cloud image\n2024547 - Detail page is breaking for namespace store , backing store and bucket class. 
\n2024551 - KMS resources not getting created for IBM FlashSystem storage\n2024586 - Special Resource Operator(SRO) - Empty image in BuildConfig when using RT kernel\n2024613 - pod-identity-webhook starts without tls\n2024617 - vSphere CSI tests constantly failing with Rollout of the monitoring stack failed and is degraded\n2024665 - Bindable services are not shown on topology\n2024731 - linuxptp container: unnecessary checking of interfaces\n2024750 - i18n some remaining OLM items\n2024804 - gcp-pd-csi-driver does not use trusted-ca-bundle when cluster proxy configured\n2024826 - [RHOS/IPI] Masters are not joining a clusters when installing on OpenStack\n2024841 - test Keycloak with latest tag\n2024859 - Not able to deploy an existing image from private image registry using developer console\n2024880 - Egress IP breaks when network policies are applied\n2024900 - Operator upgrade kube-apiserver\n2024932 - console throws \"Unauthorized\" error after logging out\n2024933 - openshift-sync plugin does not sync existing secrets/configMaps on start up\n2025093 - Installer does not honour diskformat specified in storage policy and defaults to zeroedthick\n2025230 - ClusterAutoscalerUnschedulablePods should not be a warning\n2025266 - CreateResource route has exact prop which need to be removed\n2025301 - [e2e][automation] VM actions availability in different VM states\n2025304 - overwrite storage section of the DV spec instead of the pvc section\n2025431 - [RFE]Provide specific windows source link\n2025458 - [IPI-AWS] cluster-baremetal-operator pod in a crashloop state after patching from 4.7.21 to 4.7.36\n2025464 - [aws] openshift-install gather bootstrap collects logs for bootstrap and only one master node\n2025467 - [OVN-K][ETP=local] Host to service backed by ovn pods doesn\u0027t work for ExternalTrafficPolicy=local\n2025481 - Update VM Snapshots UI\n2025488 - [DOCS] Update the doc for nmstate operator installation\n2025592 - ODC 4.9 supports invalid devfiles 
only\n2025765 - It should not try to load from storageProfile after unchecking\"Apply optimized StorageProfile settings\"\n2025767 - VMs orphaned during machineset scaleup\n2025770 - [e2e] non-priv seems looking for v2v-vmware configMap in ns \"kubevirt-hyperconverged\" while using customize wizard\n2025788 - [IPI on azure]Pre-check on IPI Azure, should check VM Size\u2019s vCPUsAvailable instead of vCPUs for the sku. \n2025821 - Make \"Network Attachment Definitions\" available to regular user\n2025823 - The console nav bar ignores plugin separator in existing sections\n2025830 - CentOS capitalizaion is wrong\n2025837 - Warn users that the RHEL URL expire\n2025884 - External CCM deploys openstack-cloud-controller-manager from quay.io/openshift/origin-*\n2025903 - [UI] RoleBindings tab doesn\u0027t show correct rolebindings\n2026104 - [sig-imageregistry][Feature:ImageAppend] Image append should create images by appending them [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2026178 - OpenShift Alerting Rules Style-Guide Compliance\n2026209 - Updation of task is getting failed (tekton hub integration)\n2026223 - Internal error occurred: failed calling webhook \"ptpconfigvalidationwebhook.openshift.io\"\n2026321 - [UPI on Azure] Shall we remove allowedValue about VMSize in ARM templates\n2026343 - [upgrade from 4.5 to 4.6] .status.connectionState.address of catsrc community-operators is not correct\n2026352 - Kube-Scheduler revision-pruner fail during install of new cluster\n2026374 - aws-pod-identity-webhook go.mod version out of sync with build environment\n2026383 - Error when rendering custom Grafana dashboard through ConfigMap\n2026387 - node tuning operator metrics endpoint serving old certificates after certificate rotation\n2026396 - Cachito Issues: sriov-network-operator Image build failure\n2026488 - openshift-controller-manager - delete event is repeating pathologically\n2026489 - ThanosRuleRuleEvaluationLatencyHigh alerts when a big quantity 
of alerts defined. \n2026560 - Cluster-version operator does not remove unrecognized volume mounts\n2026699 - fixed a bug with missing metadata\n2026813 - add Mellanox CX-6 Lx DeviceID 101f NIC support in SR-IOV Operator\n2026898 - Description/details are missing for Local Storage Operator\n2027132 - Use the specific icon for Fedora and CentOS template\n2027238 - \"Node Exporter / USE Method / Cluster\" CPU utilization graph shows incorrect legend\n2027272 - KubeMemoryOvercommit alert should be human readable\n2027281 - [Azure] External-DNS cannot find the private DNS zone in the resource group\n2027288 - Devfile samples can\u0027t be loaded after fixing it on Safari (redirect caching issue)\n2027299 - The status of checkbox component is not revealed correctly in code\n2027311 - K8s watch hooks do not work when fetching core resources\n2027342 - Alert ClusterVersionOperatorDown is firing on OpenShift Container Platform after ca certificate rotation\n2027363 - The azure-file-csi-driver and azure-file-csi-driver-operator don\u0027t use the downstream images\n2027387 - [IBMCLOUD] Terraform ibmcloud-provider buffers entirely the qcow2 image causing spikes of 5GB of RAM during installation\n2027498 - [IBMCloud] SG Name character length limitation\n2027501 - [4.10] Bootimage bump tracker\n2027524 - Delete Application doesn\u0027t delete Channels or Brokers\n2027563 - e2e/add-flow-ci.feature fix accessibility violations\n2027585 - CVO crashes when changing spec.upstream to a cincinnati graph which includes invalid conditional edges\n2027629 - Gather ValidatingWebhookConfiguration and MutatingWebhookConfiguration resource definitions\n2027685 - openshift-cluster-csi-drivers pods crashing on PSI\n2027745 - default samplesRegistry prevents the creation of imagestreams when registrySources.allowedRegistries is enforced\n2027824 - ovnkube-master CrashLoopBackoff: panic: Expected slice or struct but got string\n2027917 - No settings in hostfirmwaresettings and schema objects 
for masters
2027927 - sandbox creation fails due to obsolete option in /etc/containers/storage.conf
2027982 - nncp stucked at ConfigurationProgressing
2028019 - Max pending serving CSRs allowed in cluster machine approver is not right for UPI clusters
2028024 - After deleting a SpecialResource, the node is still tagged although the driver is removed
2028030 - Panic detected in cluster-image-registry-operator pod
2028042 - Desktop viewer for Windows VM shows "no Service for the RDP (Remote Desktop Protocol) can be found"
2028054 - Cloud controller manager operator can't get leader lease when upgrading from 4.8 up to 4.9
2028106 - [RFE] Use dynamic plugin actions for kubevirt plugin
2028141 - Console tests doesn't pass on Node.js 15 and 16
2028160 - Remove i18nKey in network-policy-peer-selectors.tsx
2028162 - Add Sprint 210 translations
2028170 - Remove leading and trailing whitespace
2028174 - Add Sprint 210 part 2 translations
2028187 - Console build doesn't pass on Node.js 16 because node-sass doesn't support it
2028217 - Cluster-version operator does not default Deployment replicas to one
2028240 - Multiple CatalogSources causing higher CPU use than necessary
2028268 - Password parameters are listed in FirmwareSchema in spite that cannot and shouldn't be set in HostFirmwareSettings
2028325 - disableDrain should be set automatically on SNO
2028484 - AWS EBS CSI driver's livenessprobe does not respect operator's loglevel
2028531 - Missing netFilter to the list of parameters when platform is OpenStack
2028610 - Installer doesn't retry on GCP rate limiting
2028685 - LSO repeatedly reports errors while diskmaker-discovery pod is starting
2028695 - destroy cluster does not prune bootstrap instance profile
2028731 - The containerruntimeconfig controller has wrong assumption regarding the number of containerruntimeconfigs
2028802 - CRI-O panic due to invalid memory address or nil pointer dereference
2028816 - VLAN IDs not released on failures
2028881 - Override not working for the PerformanceProfile template
2028885 - Console should show an error context if it logs an error object
2028949 - Masthead dropdown item hover text color is incorrect
2028963 - Whereabouts should reconcile stranded IP addresses
2029034 - enabling ExternalCloudProvider leads to inoperative cluster
2029178 - Create VM with wizard - page is not displayed
2029181 - Missing CR from PGT
2029273 - wizard is not able to use if project field is "All Projects"
2029369 - Cypress tests github rate limit errors
2029371 - patch pipeline--worker nodes unexpectedly reboot during scale out
2029394 - missing empty text for hardware devices at wizard review
2029414 - Alibaba Disk snapshots with XFS filesystem cannot be used
2029416 - Alibaba Disk CSI driver does not use credentials provided by CCO / ccoctl
2029521 - EFS CSI driver cannot delete volumes under load
2029570 - Azure Stack Hub: CSI Driver does not use user-ca-bundle
2029579 - Clicking on an Application which has a Helm Release in it causes an error
2029644 - New resource FirmwareSchema - reset_required exists for Dell machines and doesn't for HPE
2029645 - Sync upstream 1.15.0 downstream
2029671 - VM action "pause" and "clone" should be disabled while VM disk is still being importing
2029742 - [ovn] Stale lr-policy-list and snat rules left for egressip
2029750 - cvo keep restart due to it fail to get feature gate value during the initial start stage
2029785 - CVO panic when an edge is included in both edges and conditionaledges
2029843 - Downstream ztp-site-generate-rhel8 4.10 container image missing content(/home/ztp)
2030003 - HFS CRD: Attempt to set Integer parameter to not-numeric string value - no error
2030029 - [4.10][goroutine]Namespace stuck terminating: Failed to delete all resource types, 1 remaining: unexpected items still remain in namespace
2030228 - Fix StorageSpec resources field to use correct API
2030229 - Mirroring status card reflect wrong data
2030240 - Hide overview page for non-privileged user
2030305 - Export App job do not completes
2030347 - kube-state-metrics exposes metrics about resource annotations
2030364 - Shared resource CSI driver monitoring is not setup correctly
2030488 - Numerous Azure CI jobs are Failing with Partially Rendered machinesets
2030534 - Node selector/tolerations rules are evaluated too early
2030539 - Prometheus is not highly available
2030556 - Don't display Description or Message fields for alerting rules if those annotations are missing
2030568 - Operator installation fails to parse operatorframework.io/initialization-resource annotation
2030574 - console service uses older "service.alpha.openshift.io" for the service serving certificates.
2030677 - BOND CNI: There is no option to configure MTU on a Bond interface
2030692 - NPE in PipelineJobListener.upsertWorkflowJob
2030801 - CVE-2021-44716 golang: net/http: limit growth of header canonicalization cache
2030806 - CVE-2021-44717 golang: syscall: don't close fd 0 on ForkExec error
2030847 - PerformanceProfile API version should be v2
2030961 - Customizing the OAuth server URL does not apply to upgraded cluster
2031006 - Application name input field is not autofocused when user selects "Create application"
2031012 - Services of type loadbalancer do not work if the traffic reaches the node from an interface different from br-ex
2031040 - Error screen when open topology sidebar for a Serverless / knative service which couldn't be started
2031049 - [vsphere upi] pod machine-config-operator cannot be started due to panic issue
2031057 - Topology sidebar for Knative services shows a small pod ring with "0 undefined" as tooltip
2031060 - Failing CSR Unit test due to expired test certificate
2031085 - ovs-vswitchd running more threads than expected
2031141 - Some pods not able to reach k8s api svc IP 198.223.0.1
2031228 - CVE-2021-43813 grafana: directory traversal vulnerability
2031502 - [RFE] New common templates crash the ui
2031685 - Duplicated forward upstreams should be removed from the dns operator
2031699 - The displayed ipv6 address of a dns upstream should be case sensitive
2031797 - [RFE] Order and text of Boot source type input are wrong
2031826 - CI tests needed to confirm driver-toolkit image contents
2031831 - OCP Console - Global CSS overrides affecting dynamic plugins
2031839 - Starting from Go 1.17 invalid certificates will render a cluster dysfunctional
2031858 - GCP beta-level Role (was: CCO occasionally down, reporting networksecurity.googleapis.com API as disabled)
2031875 - [RFE]: Provide online documentation for the SRO CRD (via oc explain)
2031926 - [ipv6dualstack] After SVC conversion from single stack only to RequireDualStack, cannot curl NodePort from the node itself
2032006 - openshift-gitops-application-controller-0 failed to schedule with sufficient node allocatable resource
2032111 - arm64 cluster, create project and deploy the example deployment, pod is CrashLoopBackOff due to the image is built on linux+amd64
2032141 - open the alertrule link in new tab, got empty page
2032179 - [PROXY] external dns pod cannot reach to cloud API in the cluster behind a proxy
2032296 - Cannot create machine with ephemeral disk on Azure
2032407 - UI will show the default openshift template wizard for HANA template
2032415 - Templates page - remove "support level" badge and add "support level" column which should not be hard coded
2032421 - [RFE] UI integration with automatic updated images
2032516 - Not able to import git repo with .devfile.yaml
2032521 - openshift-installer intermittent failure on AWS with "Error: Provider produced inconsistent result after apply" when creating the aws_vpc_dhcp_options_association resource
2032547 - hardware devices table have filter when table is empty
2032565 - Deploying compressed files with a MachineConfig resource degrades the MachineConfigPool
2032566 - Cluster-ingress-router does not support Azure Stack
2032573 - Adopting enforces deploy_kernel/ramdisk which does not work with deploy_iso
2032589 - DeploymentConfigs ignore resolve-names annotation
2032732 - Fix styling conflicts due to recent console-wide CSS changes
2032831 - Knative Services and Revisions are not shown when Service has no ownerReference
2032851 - Networking is "not available" in Virtualization Overview
2032926 - Machine API components should use K8s 1.23 dependencies
2032994 - AddressPool IP is not allocated to service external IP wtih aggregationLength 24
2032998 - Can not achieve 250 pods/node with OVNKubernetes in a multiple worker node cluster
2033013 - Project dropdown in user preferences page is broken
2033044 - Unable to change import strategy if devfile is invalid
2033098 - Conjunction in ProgressiveListFooter.tsx is not translatable
2033111 - IBM VPC operator library bump removed global CLI args
2033138 - "No model registered for Templates" shows on customize wizard
2033215 - Flaky CI: crud/other-routes.spec.ts fails sometimes with an cypress ace/a11y AssertionError: 1 accessibility violation was detected
2033239 - [IPI on Alibabacloud] 'openshift-install' gets the wrong region ('cn-hangzhou') selected
2033257 - unable to use configmap for helm charts
2033271 - [IPI on Alibabacloud] destroying cluster succeeded, but the resource group deletion wasn't triggered
2033290 - Product builds for console are failing
2033382 - MAPO is missing machine annotations
2033391 - csi-driver-shared-resource-operator sets unused CVO-manifest annotations
2033403 - Devfile catalog does not show provider information
2033404 - Cloud event schema is missing source type and resource field is using wrong value
2033407 - Secure route data is not pre-filled in edit flow form
2033422 - CNO not allowing LGW conversion from SGW in runtime
2033434 - Offer darwin/arm64 oc in clidownloads
2033489 - CCM operator failing on baremetal platform
2033518 - [aws-efs-csi-driver]Should not accept invalid FSType in sc for AWS EFS driver
2033524 - [IPI on Alibabacloud] interactive installer cannot list existing base domains
2033536 - [IPI on Alibabacloud] bootstrap complains invalid value for alibabaCloud.resourceGroupID when updating "cluster-infrastructure-02-config.yml" status, which leads to bootstrap failed and all master nodes NotReady
2033538 - Gather Cost Management Metrics Custom Resource
2033579 - SRO cannot update the special-resource-lifecycle ConfigMap if the data field is undefined
2033587 - Flaky CI test project-dashboard.scenario.ts: Resource Quotas Card was not found on project detail page
2033634 - list-style-type: disc is applied to the modal dropdowns
2033720 - Update samples in 4.10
2033728 - Bump OVS to 2.16.0-33
2033729 - remove runtime request timeout restriction for azure
2033745 - Cluster-version operator makes upstream update service / Cincinnati requests more frequently than intended
2033749 - Azure Stack Terraform fails without Local Provider
2033750 - Local volume should pull multi-arch image for kube-rbac-proxy
2033751 - Bump kubernetes to 1.23
2033752 - make verify fails due to missing yaml-patch
2033784 - set kube-apiserver degraded=true if webhook matches a virtual resource
2034004 - [e2e][automation] add tests for VM snapshot improvements
2034068 - [e2e][automation] Enhance tests for 4.10 downstream
2034087 - [OVN] EgressIP was assigned to the node which is not egress node anymore
2034097 - [OVN] After edit EgressIP object, the status is not correct
2034102 - [OVN] Recreate the deleted EgressIP object got InvalidEgressIP warning
2034129 - blank page returned when clicking 'Get started' button
2034144 - [OVN AWS] ovn-kube egress IP monitoring cannot detect the failure on ovn-k8s-mp0
2034153 - CNO does not verify MTU migration for OpenShiftSDN
2034155 - [OVN-K] [Multiple External Gateways] Per pod SNAT is disabled
2034170 - Use function.knative.dev for Knative Functions related labels
2034190 - unable to add new VirtIO disks to VMs
2034192 - Prometheus fails to insert reporting metrics when the sample limit is met
2034243 - regular user cant load template list
2034245 - installing a cluster on aws, gcp always fails with "Error: Incompatible provider version"
2034248 - GPU/Host device modal is too small
2034257 - regular user `Create VM` missing permissions alert
2034285 - [sig-api-machinery] API data in etcd should be stored at the correct location and version for all resources [Serial] [Suite:openshift/conformance/serial]
2034287 - do not block upgrades if we can't create storageclass in 4.10 in vsphere
2034300 - Du validator policy is NonCompliant after DU configuration completed
2034319 - Negation constraint is not validating packages
2034322 - CNO doesn't pick up settings required when ExternalControlPlane topology
2034350 - The CNO should implement the Whereabouts IP reconciliation cron job
2034362 - update description of disk interface
2034398 - The Whereabouts IPPools CRD should include the podref field
2034409 - Default CatalogSources should be pointing to 4.10 index images
2034410 - Metallb BGP, BFD: prometheus is not scraping the frr metrics
2034413 - cloud-network-config-controller fails to init with secret "cloud-credentials" not found in manual credential mode
2034460 - Summary: cloud-network-config-controller does not account for different environment
2034474 - Template's boot source is "Unknown source" before and after set enableCommonBootImageImport to true
2034477 - [OVN] Multiple EgressIP objects configured, EgressIPs weren't working properly
2034493 - Change cluster version operator log level
2034513 - [OVN] After update one EgressIP in EgressIP object, one internal IP lost from lr-policy-list
2034527 - IPI deployment fails 'timeout reached while inspecting the node' when provisioning network ipv6
2034528 - [IBM VPC] volumeBindingMode should be WaitForFirstConsumer
2034534 - Update ose-machine-api-provider-openstack images to be consistent with ART
2034537 - Update team
2034559 - KubeAPIErrorBudgetBurn firing outside recommended latency thresholds
2034563 - [Azure] create machine with wrong ephemeralStorageLocation value success
2034577 - Current OVN gateway mode should be reflected on node annotation as well
2034621 - context menu not popping up for application group
2034622 - Allow volume expansion by default in vsphere CSI storageclass 4.10
2034624 - Warn about unsupported CSI driver in vsphere operator
2034647 - missing volumes list in snapshot modal
2034648 - Rebase openshift-controller-manager to 1.23
2034650 - Rebase openshift/builder to 1.23
2034705 - vSphere: storage e2e tests logging configuration data
2034743 - EgressIP: assigning the same egress IP to a second EgressIP object after a ovnkube-master restart does not fail.
2034766 - Special Resource Operator(SRO) - no cert-manager pod created in dual stack environment
2034785 - ptpconfig with summary_interval cannot be applied
2034823 - RHEL9 should be starred in template list
2034838 - An external router can inject routes if no service is added
2034839 - Jenkins sync plugin does not synchronize ConfigMap having label role=jenkins-agent
2034879 - Lifecycle hook's name and owner shouldn't be allowed to be empty
2034881 - Cloud providers components should use K8s 1.23 dependencies
2034884 - ART cannot build the image because it tries to download controller-gen
2034889 - `oc adm prune deployments` does not work
2034898 - Regression in recently added Events feature
2034957 - update openshift-apiserver to kube 1.23.1
2035015 - ClusterLogForwarding CR remains stuck remediating forever
2035093 - openshift-cloud-network-config-controller never runs on Hypershift cluster
2035141 - [RFE] Show GPU/Host devices in template's details tab
2035146 - "kubevirt-plugin~PVC cannot be empty" shows on add-disk modal while adding existing PVC
2035167 - [cloud-network-config-controller] unable to deleted cloudprivateipconfig when deleting
2035199 - IPv6 support in mtu-migration-dispatcher.yaml
2035239 - e2e-metal-ipi-virtualmedia tests are permanently failing
2035250 - Peering with ebgp peer over multi-hops doesn't work
2035264 - [RFE] Provide a proper message for nonpriv user who not able to add PCI devices
2035315 - invalid test cases for AWS passthrough mode
2035318 - Upgrade management workflow needs to allow custom upgrade graph path for disconnected env
2035321 - Add Sprint 211 translations
2035326 - [ExternalCloudProvider] installation with additional network on workers fails
2035328 - Ccoctl does not ignore credentials request manifest marked for deletion
2035333 - Kuryr orphans ports on 504 errors from Neutron
2035348 - Fix two grammar issues in kubevirt-plugin.json strings
2035393 - oc set data --dry-run=server makes persistent changes to configmaps and secrets
2035409 - OLM E2E test depends on operator package that's no longer published
2035439 - SDN Automatic assignment EgressIP on GCP returned node IP adress not egressIP address
2035453 - [IPI on Alibabacloud] 2 worker machines stuck in Failed phase due to connection to 'ecs-cn-hangzhou.aliyuncs.com' timeout, although the specified region is 'us-east-1'
2035454 - [IPI on Alibabacloud] the OSS bucket created during installation for image registry is not deleted after destroying the cluster
2035467 - UI: Queried metrics can't be ordered on Oberve->Metrics page
2035494 - [SDN Migration]ovnkube-node pods CrashLoopBackOff after sdn migrated to ovn for RHEL workers
2035515 - [IBMCLOUD] allowVolumeExpansion should be true in storage class
2035602 - [e2e][automation] add tests for Virtualization Overview page cards
2035703 - Roles -> RoleBindings tab doesn't show RoleBindings correctly
2035704 - RoleBindings list page filter doesn't apply
2035705 - Azure 'Destroy cluster' get stuck when the cluster resource group is already not existing.
2035757 - [IPI on Alibabacloud] one master node turned NotReady which leads to installation failed
2035772 - AccessMode and VolumeMode is not reserved for customize wizard
2035847 - Two dashes in the Cronjob / Job pod name
2035859 - the output of opm render doesn't contain olm.constraint which is defined in dependencies.yaml
2035882 - [BIOS setting values] Create events for all invalid settings in spec
2035903 - One redundant capi-operator credential requests in "oc adm extract --credentials-requests"
2035910 - [UI] Manual approval options are missing after ODF 4.10 installation starts when Manual Update approval is chosen
2035927 - Cannot enable HighNodeUtilization scheduler profile
2035933 - volume mode and access mode are empty in customize wizard review tab
2035969 - "ip a " shows "Error: Peer netns reference is invalid" after create test pods
2035986 - Some pods under kube-scheduler/kube-controller-manager are using the deprecated annotation
2036006 - [BIOS setting values] Attempt to set Integer parameter results in preparation error
2036029 - New added cloud-network-config operator doesn't supported aws sts format credential
2036096 - [azure-file-csi-driver] there are no e2e tests for NFS backend
2036113 - cluster scaling new nodes ovs-configuration fails on all new nodes
2036567 - [csi-driver-nfs] Upstream merge: Bump k8s libraries to 1.23
2036569 - [cloud-provider-openstack] Upstream merge: Bump k8s libraries to 1.23
2036577 - OCP 4.10 nightly builds from 4.10.0-0.nightly-s390x-2021-12-18-034912 to 4.10.0-0.nightly-s390x-2022-01-11-233015 fail to upgrade from OCP 4.9.11 and 4.9.12 for network type OVNKubernetes for zVM hypervisor environments
2036622 - sdn-controller crashes when restarted while a previous egress IP assignment exists
2036717 - Valid AlertmanagerConfig custom resource with valid a mute time interval definition is rejected
2036826 - `oc adm prune deployments` can prune the RC/RS
2036827 - The ccoctl still accepts CredentialsRequests without ServiceAccounts on GCP platform
2036861 - kube-apiserver is degraded while enable multitenant
2036937 - Command line tools page shows wrong download ODO link
2036940 - oc registry login fails if the file is empty or stdout
2036951 - [cluster-csi-snapshot-controller-operator] proxy settings is being injected in container
2036989 - Route URL copy to clipboard button wraps to a separate line by itself
2036990 - ZTP "DU Done inform policy" never becomes compliant on multi-node clusters
2036993 - Machine API components should use Go lang version 1.17
2037036 - The tuned profile goes into degraded status and ksm.service is displayed in the log.
2037061 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cluster-api
2037073 - Alertmanager container fails to start because of startup probe never being successful
2037075 - Builds do not support CSI volumes
2037167 - Some log level in ibm-vpc-block-csi-controller are hard code
2037168 - IBM-specific Deployment manifest for package-server-manager should be excluded on non-IBM cluster-profiles
2037182 - PingSource badge color is not matched with knativeEventing color
2037203 - "Running VMs" card is too small in Virtualization Overview
2037209 - [IPI on Alibabacloud] worker nodes are put in the default resource group unexpectedly
2037237 - Add "This is a CD-ROM boot source" to customize wizard
2037241 - default TTL for noobaa cache buckets should be 0
2037246 - Cannot customize auto-update boot source
2037276 - [IBMCLOUD] vpc-node-label-updater may fail to label nodes appropriately
2037288 - Remove stale image reference
2037331 - Ensure the ccoctl behaviors are similar between aws and gcp on the existing resources
2037483 - Rbacs for Pods within the CBO should be more restrictive
2037484 - Bump dependencies to k8s 1.23
2037554 - Mismatched wave number error message should include the wave numbers that are in conflict
2037622 - [4.10-Alibaba CSI driver][Restore size for volumesnapshot/volumesnapshotcontent is showing as 0 in Snapshot feature for Alibaba platform]
2037635 - impossible to configure custom certs for default console route in ingress config
2037637 - configure custom certificate for default console route doesn't take effect for OCP >= 4.8
2037638 - Builds do not support CSI volumes as volume sources
2037664 - text formatting issue in Installed Operators list table
2037680 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037689 - [IPI on Alibabacloud] sometimes operator 'cloud-controller-manager' tells empty VERSION, due to conflicts on listening tcp :8080
2037801 - Serverless installation is failing on CI jobs for e2e tests
2037813 - Metal Day 1 Networking - networkConfig Field Only Accepts String Format
2037856 - use lease for leader election
2037891 - 403 Forbidden error shows for all the graphs in each grafana dashboard after upgrade from 4.9 to 4.10
2037903 - Alibaba Cloud: delete-ram-user requires the credentials-requests
2037904 - upgrade operator deployment failed due to memory limit too low for manager container
2038021 - [4.10-Alibaba CSI driver][Default volumesnapshot class is not added/present after successful cluster installation]
2038034 - non-privileged user cannot see auto-update boot source
2038053 - Bump dependencies to k8s 1.23
2038088 - Remove ipa-downloader references
2038160 - The `default` project missed the annotation : openshift.io/node-selector: ""
2038166 - Starting from Go 1.17 invalid certificates will render a cluster non-functional
2038196 - must-gather is missing collecting some metal3 resources
2038240 - Error when configuring a file using permissions bigger than decimal 511 (octal 0777)
2038253 - Validator Policies are long lived
2038272 - Failures to build a PreprovisioningImage are not reported
2038384 - Azure Default Instance Types are Incorrect
2038389 - Failing test: [sig-arch] events should not repeat pathologically
2038412 - Import page calls the git file list unnecessarily twice from GitHub/GitLab/Bitbucket
2038465 - Upgrade chromedriver to 90.x to support Mac M1 chips
2038481 - kube-controller-manager-guard and openshift-kube-scheduler-guard pods being deleted and restarted on a cordoned node when drained
2038596 - Auto egressIP for OVN cluster on GCP: After egressIP object is deleted, egressIP still takes effect
2038663 - update kubevirt-plugin OWNERS
2038691 - [AUTH-8] Panic on user login when the user belongs to a group in the IdP side and the group already exists via "oc adm groups new"
2038705 - Update ptp reviewers
2038761 - Open Observe->Targets page, wait for a while, page become blank
2038768 - All the filters on the Observe->Targets page can't work
2038772 - Some monitors failed to display on Observe->Targets page
2038793 - [SDN EgressIP] After reboot egress node, the egressip was lost from egress node
2038827 - should add user containers in /etc/subuid and /etc/subgid to support run pods in user namespaces
2038832 - New templates for centos stream8 are missing registry suggestions in create vm wizard
2038840 - [SDN EgressIP]cloud-network-config-controller pod was CrashLoopBackOff after some operation
2038864 - E2E tests fail because multi-hop-net was not created
2038879 - All Builds are getting listed in DeploymentConfig under workloads on OpenShift Console
2038934 - CSI driver operators should use the trusted CA bundle when cluster proxy is configured
2038968 - Move feature gates from a carry patch to openshift/api
2039056 - Layout issue with breadcrumbs on API explorer page
2039057 - Kind column is not wide enough in API explorer page
2039064 - Bulk Import e2e test flaking at a high rate
2039065 - Diagnose and fix Bulk Import e2e test that was previously disabled
2039085 - Cloud credential operator configuration failing to apply in hypershift/ROKS clusters
2039099 - [OVN EgressIP GCP] After reboot egress node, egressip that was previously assigned got lost
2039109 - [FJ OCP4.10 Bug]: startironic.sh failed to pull the image of image-customization container when behind a proxy
2039119 - CVO hotloops on Service openshift-monitoring/cluster-monitoring-operator
2039170 - [upgrade]Error shown on registry operator "missing the cloud-provider-config configmap" after upgrade
2039227 - Improve image customization server parameter passing during installation
2039241 - Improve image customization server parameter passing during installation
2039244 - Helm Release revision history page crashes the UI
2039294 - SDN controller metrics cannot be consumed correctly by prometheus
2039311 - oc Does Not Describe Build CSI Volumes
2039315 - Helm release list page should only fetch secrets for deployed charts
2039321 - SDN controller metrics are not being consumed by prometheus
2039330 - Create NMState button doesn't work in OperatorHub web console
2039339 - cluster-ingress-operator should report Unupgradeable if user has modified the aws resources annotations
2039345 - CNO does not verify the minimum MTU value for IPv6/dual-stack clusters.
2039359 - `oc adm prune deployments` can't prune the RS where the associated Deployment no longer exists
2039382 - gather_metallb_logs does not have execution permission
2039406 - logout from rest session after vsphere operator sync is finished
2039408 - Add GCP region northamerica-northeast2 to allowed regions
2039414 - Cannot see the weights increased for NodeAffinity, InterPodAffinity, TaintandToleration
2039425 - No need to set KlusterletAddonConfig CR applicationManager->enabled: true in RAN ztp deployment
2039491 - oc - git:// protocol used in unit tests
2039516 - Bump OVN to ovn21.12-21.12.0-25
2039529 - Project Dashboard Resource Quotas Card empty state test flaking at a high rate
2039534 - Diagnose and fix Project Dashboard Resource Quotas Card test that was previously disabled
2039541 - Resolv-prepender script duplicating entries
2039586 - [e2e] update centos8 to centos stream8
2039618 - VM created from SAP HANA template leads to 404 page if leave one network parameter empty
2039619 - [AWS] In tree provisioner storageclass aws disk type should contain 'gp3' and csi provisioner storageclass default aws disk type should be 'gp3'
2039670 - Create PDBs for control plane components
2039678 - Page goes blank when create image pull secret
2039689 - [IPI on Alibabacloud] Pay-by-specification NAT is no longer supported
2039743 - React missing key warning when open operator hub detail page (and maybe others as well)
2039756 - React missing key warning when open KnativeServing details
2039770 - Observe dashboard doesn't react on time-range changes after browser reload when perspective is changed in another tab
2039776 - Observe dashboard shows nothing if the URL links to an non existing dashboard
2039781 - [GSS] OBC is not visible by admin of a Project on Console
2039798 - Contextual binding with Operator backed service creates visual connector instead of Service binding connector
2039868 - Insights Advisor widget is not in the disabled state when the Insights Operator is disabled
2039880 - Log level too low for control plane metrics
2039919 - Add E2E test for router compression feature
2039981 - ZTP for standard clusters installs stalld on master nodes
2040132 - Flag --port has been deprecated, This flag has no effect now and will be removed in v1.24. You can use --secure-port instead
2040136 - external-dns-operator pod keeps restarting and reports error: timed out waiting for cache to be synced
2040143 - [IPI on Alibabacloud] suggest to remove region "cn-nanjing" or provide better error message
2040150 - Update ConfigMap keys for IBM HPCS
2040160 - [IPI on Alibabacloud] installation fails when region does not support pay-by-bandwidth
2040285 - Bump build-machinery-go for console-operator to pickup change in yaml-patch repository
2040357 - bump OVN to ovn-2021-21.12.0-11.el8fdp
2040376 - "unknown instance type" error for supported m6i.xlarge instance
2040394 - Controller: enqueue the failed configmap till services update
2040467 - Cannot build ztp-site-generator container image
2040504 - Change AWS EBS GP3 IOPS in MachineSet doesn't take affect in OpenShift 4
2040521 - RouterCertsDegraded certificate could not validate route hostname v4-0-config-system-custom-router-certs.apps
2040535 - Auto-update boot source is not available in customize wizard
2040540 - ovs hardware offload: ovsargs format error when adding vf netdev name
2040603 - rhel worker scaleup playbook failed because missing some dependency of podman
2040616 - rolebindings page doesn't load for normal users
2040620 - [MAPO] Error pulling MAPO image on installation
2040653 - Topology sidebar warns that another component is updated while rendering
2040655 - User settings update fails when selecting application in topology sidebar
2040661 - Different react warnings about updating state on unmounted components when leaving topology
2040670 - Permafailing CI job: periodic-ci-openshift-release-master-nightly-4.10-e2e-gcp-libvirt-cert-rotation
2040671 - [Feature:IPv6DualStack] most tests are failing in dualstack ipi
2040694 - Three upstream HTTPClientConfig struct fields missing in the operator
2040705 - Du policy for standard cluster runs the PTP daemon on masters and workers
2040710 - cluster-baremetal-operator cannot update BMC subscription CR
2040741 - Add CI test(s) to ensure that metal3 components are deployed in vSphere, OpenStack and None platforms
2040782 - Import YAML page blocks input with more then one generateName attribute
2040783 - The Import from YAML summary page doesn't show the resource name if created via generateName attribute
2040791 - Default PGT policies must be 'inform' to integrate with the Lifecycle Operator
2040793 - Fix snapshot e2e failures
2040880 - do not block upgrades if we can't connect to vcenter
2041087 - MetalLB: MetalLB CR is not upgraded automatically from 4.9 to 4.10
2041093 - autounattend.xml missing
2041204 - link to templates in virtualization-cluster-overview inventory card is to all templates
2041319 - [IPI on Alibabacloud] installation in region "cn-shanghai" failed, due to "Resource alicloud_vswitch CreateVSwitch Failed...InvalidCidrBlock.Overlapped"
2041326 - Should bump cluster-kube-descheduler-operator to kubernetes version V1.23
2041329 - aws and gcp CredentialsRequest manifests missing ServiceAccountNames list for cloud-network-config-controller
2041361 - [IPI on Alibabacloud] Disable session persistence and removebBandwidth peak of listener
2041441 - Provision volume with size 3000Gi even if sizeRange: '[10-2000]GiB' in storageclass on IBM cloud
2041466 - Kubedescheduler version is missing from the operator logs
2041475 - React components should have a (mostly) unique name in react dev tools to simplify code analyses
2041483 - MetallB: quay.io/openshift/origin-kube-rbac-proxy:4.10 deploy Metallb CR is missing (controller and speaker pods)
2041492 - Spacing between resources in inventory card is too small
2041509 - GCP Cloud provider components should use K8s 1.23 dependencies
2041510 - cluster-baremetal-operator doesn't run baremetal-operator's subscription webhook
2041541 - audit: ManagedFields are dropped using API not annotation
2041546 - ovnkube: set election timer at RAFT cluster creation time
2041554 - use lease for leader election
2041581 - KubeDescheduler operator log shows "Use of insecure cipher detected"
2041583 - etcd and api server cpu mask interferes with a guaranteed workload
2041598 - Including CA bundle in Azure Stack cloud config causes MCO failure
2041605 - Dynamic Plugins: discrepancy in proxy alias documentation/implementation
2041620 - bundle CSV alm-examples does not parse
2041641 - Fix inotify leak and kubelet retaining memory
2041671 - Delete templates leads to 404 page
2041694 - [IPI on Alibabacloud] installation fails when region does not support the cloud_essd disk category
2041734 - ovs hwol: VFs are unbind when switchdev mode is enabled
2041750 - [IPI on Alibabacloud] trying "create install-config" with region "cn-wulanchabu (China (Ulanqab))" (or "ap-southeast-6 (Philippines (Manila))", "cn-guangzhou (China (Guangzhou))") failed due to invalid endpoint
2041763 - The Observe > Alerting pages no longer have their default sort order applied
2041830 - CI: ovn-kubernetes-master-e2e-aws-ovn-windows is broken
2041854 - Communities / Local prefs are applied to all the services regardless of the pool, and only one community is applied
2041882 - cloud-network-config operator can't work normal on GCP workload identity cluster
2041888 - Intermittent incorrect build to run correlation, leading to run status updates applied to wrong build, builds stuck in non-terminal phases
2041926 - [IPI on Alibabacloud] Installer ignores public zone when it does not exist
2041971 - [vsphere] Reconciliation of mutating webhooks didn't happen
2041989 - CredentialsRequest manifests being installed for ibm-cloud-managed profile
2041999 - [PROXY] external dns pod cannot recognize custom proxy CA
2042001 - unexpectedly found multiple load balancers
2042029 - kubedescheduler fails to install completely
2042036 - [IBMCLOUD] "openshift-install explain installconfig.platform.ibmcloud" contains not yet supported custom vpc parameters
2042049 - Seeing warning related to unrecognized feature gate in kubescheduler & KCM logs
2042059 - update discovery burst to reflect lots of CRDs on openshift clusters
2042069 - Revert toolbox to rhcos-toolbox
2042169 - Can not delete egressnetworkpolicy in Foreground propagation
2042181 - MetalLB: User should not be allowed add same bgp advertisement twice in BGP address pool
2042265 - [IBM]"--scale-down-utilization-threshold" doesn't work on IBMCloud
2042274 - Storage API should be used when creating a PVC
2042315 - Baremetal IPI deployment with IPv6 control plane and disabled provisioning network fails as the nodes do not pass introspection
2042366 - Lifecycle hooks should be independently managed
2042370 - [IPI on Alibabacloud] installer panics when the zone does not have an enhanced NAT gateway
2042382 - [e2e][automation] CI takes more then 2 hours to run
2042395 - Add prerequisites for active health checks test
2042438 - Missing rpms in openstack-installer image
2042466 - Selection does not happen when switching from Topology Graph to List View
2042493 - No way to verify if IPs with leading zeros are still valid in the apiserver
2042567 - insufficient info on CodeReady Containers configuration
2042600 - Alone, the io.kubernetes.cri-o.Devices option poses a security risk
2042619 - Overview page of the console is broken for hypershift clusters
2042655 - [IPI on Alibabacloud] cluster becomes unusable if there is only one kube-apiserver pod running
2042711 - [IBMCloud] Machine Deletion Hook cannot work on IBMCloud
2042715 - [AliCloud] Machine Deletion Hook cannot work on AliCloud
2042770 - [IPI on Alibabacloud] with vpcID & vswitchIDs specified, the installer would still try creating NAT gateway unexpectedly
2042829 - Topology performance: HPA was fetched for each Deployment (Pod Ring)
2042851 - Create template from SAP HANA template flow - VM is created instead of a new template
2042906 - Edit machineset with same machine deletion hook name succeed
2042960 - azure-file CI fails with "gid(0) in storageClass and pod fsgroup(1000) are not equal"
2043003 - [IPI on Alibabacloud] 'destroy cluster' of a failed installation (bug2041694) stuck after 'stage=Nat gateways'
2043042 - [Serial] [sig-auth][Feature:OAuthServer] [RequestHeaders] [IdP] test RequestHeaders IdP [Suite:openshift/conformance/serial]
2043043 - Cluster Autoscaler should use K8s 1.23 dependencies
2043064 - Topology performance: Unnecessary rerenderings in topology nodes (unchanged mobx props)
2043078 - Favorite system projects not visible in the project selector after toggling "Show default projects".
2043117 - Recommended operators links are erroneously treated as external
2043130 - Update CSI sidecars to the latest release for 4.10
2043234 - Missing validation when creating several BGPPeers with the same peerAddress
2043240 - Sync openshift/descheduler with sigs.k8s.io/descheduler
2043254 - crio does not bind the security profiles directory
2043296 - Ignition fails when reusing existing statically-keyed LUKS volume
2043297 - [4.10] Bootimage bump tracker
2043316 - RHCOS VM fails to boot on Nutanix AOS
2043446 - Rebase aws-efs-utils to the latest upstream version.
2043556 - Add proper ci-operator configuration to ironic and ironic-agent images
2043577 - DPU network operator
2043651 - Fix bug with exp. backoff working correcly when setting nextCheck in vsphere operator
2043675 - Too many machines deleted by cluster autoscaler when scaling down
2043683 - Revert bug 2039344 Ignoring IPv6 addresses against etcd cert validation
2043709 - Logging flags no longer being bound to command line
2043721 - Installer bootstrap hosts using outdated kubelet containing bugs
2043731 - [IBMCloud] terraform outputs missing for ibmcloud bootstrap and worker ips for must-gather
2043759 - Bump cluster-ingress-operator to k8s.io/api 1.23
2043780 - Bump router to k8s.io/api 1.23
2043787 - Bump cluster-dns-operator to k8s.io/api 1.23
2043801 - Bump CoreDNS to k8s.io/api 1.23
2043802 - EgressIP stopped working after single egressIP for a netnamespace is switched to the other node of HA pair after the first egress node is shutdown
2043961 - [OVN-K] If pod creation fails, retry doesn't work as expected.
2044201 - Templates golden image parameters names should be supported
2044244 - Builds are failing after upgrading the cluster with builder image [jboss-webserver-5/jws56-openjdk8-openshift-rhel8]
2044248 - [IBMCloud][vpc.block.csi.ibm.io]Cluster common user use the storageclass without parameter "csi.storage.k8s.io/fstype" create pvc,pod successfully but write data to the pod's volume failed of "Permission denied"
2044303 - [ovn][cloud-network-config-controller] cloudprivateipconfigs ips were left after deleting egressip objects
2044347 - Bump to kubernetes 1.23.3
2044481 - collect sharedresource cluster scoped instances with must-gather
2044496 - Unable to create hardware events subscription - failed to add finalizers
2044628 - CVE-2022-21673 grafana: Forward OAuth Identity Token can allow users to access some data sources
2044680 - Additional libovsdb performance and resource consumption fixes
2044704 - Observe > Alerting pages should not show runbook links in 4.10
2044717 - [e2e] improve tests for upstream test environment
2044724 - 
Remove namespace column on VM list page when a project is selected\n2044745 - Upgrading cluster from 4.9 to 4.10 on Azure (ARO) causes the cloud-network-config-controller pod to CrashLoopBackOff\n2044808 - machine-config-daemon-pull.service: use `cp` instead of `cat` when extracting MCD in OKD\n2045024 - CustomNoUpgrade alerts should be ignored\n2045112 - vsphere-problem-detector has missing rbac rules for leases\n2045199 - SnapShot with Disk Hot-plug hangs\n2045561 - Cluster Autoscaler should use the same default Group value as Cluster API\n2045591 - Reconciliation of aws pod identity mutating webhook did not happen\n2045849 - Add Sprint 212 translations\n2045866 - MCO Operator pod spam \"Error creating event\" warning messages in 4.10\n2045878 - Sync upstream 1.16.0 downstream; includes hybrid helm plugin\n2045916 - [IBMCloud] Default machine profile in installer is unreliable\n2045927 - [FJ OCP4.10 Bug]: Podman failed to pull the IPA image due to the loss of proxy environment\n2046025 - [IPI on Alibabacloud] pre-configured alicloud DNS private zone is deleted after destroying cluster, please clarify\n2046137 - oc output for unknown commands is not human readable\n2046296 - When creating multiple consecutive egressIPs on GCP not all of them get assigned to the instance\n2046297 - Bump DB reconnect timeout\n2046517 - In Notification drawer, the \"Recommendations\" header shows when there isn\u0027t any recommendations\n2046597 - Observe \u003e Targets page may show the wrong service monitor is multiple monitors have the same namespace \u0026 label selectors\n2046626 - Allow setting custom metrics for Ansible-based Operators\n2046683 - [AliCloud]\"--scale-down-utilization-threshold\" doesn\u0027t work on AliCloud\n2047025 - Installation fails because of Alibaba CSI driver operator is degraded\n2047190 - Bump Alibaba CSI driver for 4.10\n2047238 - When using communities and localpreferences together, only localpreference gets applied\n2047255 - alibaba: 
resourceGroupID not found\n2047258 - [aws-usgov] fatal error occurred if AMI is not provided for AWS GovCloud regions\n2047317 - Update HELM OWNERS files under Dev Console\n2047455 - [IBM Cloud] Update custom image os type\n2047496 - Add image digest feature\n2047779 - do not degrade cluster if storagepolicy creation fails\n2047927 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2047929 - use lease for leader election\n2047975 - [sig-network][Feature:Router] The HAProxy router should override the route host for overridden domains with a custom value [Skipped:Disconnected] [Suite:openshift/conformance/parallel]\n2048046 - New route annotation to show another URL or hide topology URL decorator doesn\u0027t work for Knative Services\n2048048 - Application tab in User Preferences dropdown menus are too wide. \n2048050 - Topology list view items are not highlighted on keyboard navigation\n2048117 - [IBM]Shouldn\u0027t change status.storage.bucket and status.storage.resourceKeyCRN when update sepc.stroage,ibmcos with invalid value\n2048413 - Bond CNI: Failed to attach Bond NAD to pod\n2048443 - Image registry operator panics when finalizes config deletion\n2048478 - [alicloud] CCM deploys alibaba-cloud-controller-manager from quay.io/openshift/origin-*\n2048484 - SNO: cluster-policy-controller failed to start due to missing serving-cert/tls.crt\n2048598 - Web terminal view is broken\n2048836 - ovs-configure mis-detecting the ipv6 status on IPv4 only cluster causing Deployment failure\n2048891 - Topology page is crashed\n2049003 - 4.10: [IBMCloud] ibm-vpc-block-csi-node does not specify an update strategy, only resource requests, or priority class\n2049043 - Cannot create VM from template\n2049156 - \u0027oc get project\u0027 caused \u0027Observed a panic: cannot deep copy core.NamespacePhase\u0027 when AllRequestBodies is used\n2049886 - Placeholder bug for OCP 4.10.0 metadata 
release\n2049890 - Warning annotation for pods with cpu requests or limits on single-node OpenShift cluster without workload partitioning\n2050189 - [aws-efs-csi-driver] Merge upstream changes since v1.3.2\n2050190 - [aws-ebs-csi-driver] Merge upstream changes since v1.2.0\n2050227 - Installation on PSI fails with: \u0027openstack platform does not have the required standard-attr-tag network extension\u0027\n2050247 - Failing test in periodics: [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy] [Skipped:Network/OVNKubernetes] [Suite:openshift/conformance/parallel] [Suite:k8s]\n2050250 - Install fails to bootstrap, complaining about DefragControllerDegraded and sad members\n2050310 - ContainerCreateError when trying to launch large (\u003e500) numbers of pods across nodes\n2050370 - alert data for burn budget needs to be updated to prevent regression\n2050393 - ZTP missing support for local image registry and custom machine config\n2050557 - Can not push images to image-registry when enabling KMS encryption in AlibabaCloud\n2050737 - Remove metrics and events for master port offsets\n2050801 - Vsphere upi tries to access vsphere during manifests generation phase\n2050883 - Logger object in LSO does not log source location accurately\n2051692 - co/image-registry is degrade because ImagePrunerDegraded: Job has reached the specified backoff limit\n2052062 - Whereabouts should implement client-go 1.22+\n2052125 - [4.10] Crio appears to be coredumping in some scenarios\n2052210 - [aws-c2s] kube-apiserver crashloops due to missing cloud config\n2052339 - Failing webhooks will block an upgrade to 4.10 mid-way through the upgrade. 
\n2052458 - [IBM Cloud] ibm-vpc-block-csi-controller does not specify an update strategy, priority class, or only resource requests\n2052598 - kube-scheduler should use configmap lease\n2052599 - kube-controller-manger should use configmap lease\n2052600 - Failed to scaleup RHEL machine against OVN cluster due to jq tool is required by configure-ovs.sh\n2052609 - [vSphere CSI driver Operator] RWX volumes counts metrics `vsphere_rwx_volumes_total` not valid\n2052611 - MetalLB: BGPPeer object does not have ability to set ebgpMultiHop\n2052612 - MetalLB: Webhook Validation: Two BGPPeers instances can have different router ID set. \n2052644 - Infinite OAuth redirect loop post-upgrade to 4.10.0-rc.1\n2052666 - [4.10.z] change gitmodules to rhcos-4.10 branch\n2052756 - [4.10] PVs are not being cleaned up after PVC deletion\n2053175 - oc adm catalog mirror throws \u0027missing signature key\u0027 error when using file://local/index\n2053218 - ImagePull fails with error \"unable to pull manifest from example.com/busy.box:v5 invalid reference format\"\n2053252 - Sidepanel for Connectors/workloads in topology shows invalid tabs\n2053268 - inability to detect static lifecycle failure\n2053314 - requestheader IDP test doesn\u0027t wait for cleanup, causing high failure rates\n2053323 - OpenShift-Ansible BYOH Unit Tests are Broken\n2053339 - Remove dev preview badge from IBM FlashSystem deployment windows\n2053751 - ztp-site-generate container is missing convenience entrypoint\n2053945 - [4.10] Failed to apply sriov policy on intel nics\n2054109 - Missing \"app\" label\n2054154 - RoleBinding in project without subject is causing \"Project access\" page to fail\n2054244 - Latest pipeline run should be listed on the top of the pipeline run list\n2054288 - console-master-e2e-gcp-console is broken\n2054562 - DPU network operator 4.10 branch need to sync with master\n2054897 - Unable to deploy hw-event-proxy operator\n2055193 - e2e-metal-ipi-serial-ovn-ipv6 is failing 
frequently\n2055358 - Summary Interval Hardcoded in PTP Operator if Set in the Global Body Instead of Command Line\n2055371 - Remove Check which enforces summary_interval must match logSyncInterval\n2055689 - [ibm]Operator storage PROGRESSING and DEGRADED is true during fresh install for ocp4.11\n2055894 - CCO mint mode will not work for Azure after sunsetting of Active Directory Graph API\n2056441 - AWS EFS CSI driver should use the trusted CA bundle when cluster proxy is configured\n2056479 - ovirt-csi-driver-node pods are crashing intermittently\n2056572 - reconcilePrecaching error: cannot list resource \"clusterserviceversions\" in API group \"operators.coreos.com\" at the cluster scope\"\n2056629 - [4.10] EFS CSI driver can\u0027t unmount volumes with \"wait: no child processes\"\n2056878 - (dummy bug) ovn-kubernetes ExternalTrafficPolicy still SNATs\n2056928 - Ingresscontroller LB scope change behaviour differs for different values of aws-load-balancer-internal annotation\n2056948 - post 1.23 rebase: regression in service-load balancer reliability\n2057438 - Service Level Agreement (SLA) always show \u0027Unknown\u0027\n2057721 - Fix Proxy support in RHACM 2.4.2\n2057724 - Image creation fails when NMstateConfig CR is empty\n2058641 - [4.10] Pod density test causing problems when using kube-burner\n2059761 - 4.9.23-s390x-machine-os-content manifest invalid when mirroring content for disconnected install\n2060610 - Broken access to public images: Unable to connect to the server: no basic auth credentials\n2060956 - service domain can\u0027t be resolved when networkpolicy is used in OCP 4.10-rc\n\n5. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2014-3577\nhttps://access.redhat.com/security/cve/CVE-2016-10228\nhttps://access.redhat.com/security/cve/CVE-2017-14502\nhttps://access.redhat.com/security/cve/CVE-2018-20843\nhttps://access.redhat.com/security/cve/CVE-2018-1000858\nhttps://access.redhat.com/security/cve/CVE-2019-8625\nhttps://access.redhat.com/security/cve/CVE-2019-8710\nhttps://access.redhat.com/security/cve/CVE-2019-8720\nhttps://access.redhat.com/security/cve/CVE-2019-8743\nhttps://access.redhat.com/security/cve/CVE-2019-8764\nhttps://access.redhat.com/security/cve/CVE-2019-8766\nhttps://access.redhat.com/security/cve/CVE-2019-8769\nhttps://access.redhat.com/security/cve/CVE-2019-8771\nhttps://access.redhat.com/security/cve/CVE-2019-8782\nhttps://access.redhat.com/security/cve/CVE-2019-8783\nhttps://access.redhat.com/security/cve/CVE-2019-8808\nhttps://access.redhat.com/security/cve/CVE-2019-8811\nhttps://access.redhat.com/security/cve/CVE-2019-8812\nhttps://access.redhat.com/security/cve/CVE-2019-8813\nhttps://access.redhat.com/security/cve/CVE-2019-8814\nhttps://access.redhat.com/security/cve/CVE-2019-8815\nhttps://access.redhat.com/security/cve/CVE-2019-8816\nhttps://access.redhat.com/security/cve/CVE-2019-8819\nhttps://access.redhat.com/security/cve/CVE-2019-8820\nhttps://access.redhat.com/security/cve/CVE-2019-8823\nhttps://access.redhat.com/security/cve/CVE-2019-8835\nhttps://access.redhat.com/security/cve/CVE-2019-8844\nhttps://access.redhat.com/security/cve/CVE-2019-8846\nhttps://access.redhat.com/security/cve/CVE-2019-9169\nhttps://access.redhat.com/security/cve/CVE-2019-13050\nhttps://access.redhat.com/security/cve/CVE-2019-13627\nhttps://access.redhat.com/security/cve/CVE-2019-14889\nhttps://access.redhat.com/security/cve/CVE-2019-15903\nhttps://access.redhat.com/security/cve/CVE-2019-19906\nhttps://access.redhat.com/security/cve/CVE-2019-20454\nhttps://access.redhat.com/security/cve/CVE-2019-20807\nhttps://access.redhat.com/se
curity/cve/CVE-2019-25013\nhttps://access.redhat.com/security/cve/CVE-2020-1730\nhttps://access.redhat.com/security/cve/CVE-2020-3862\nhttps://access.redhat.com/security/cve/CVE-2020-3864\nhttps://access.redhat.com/security/cve/CVE-2020-3865\nhttps://access.redhat.com/security/cve/CVE-2020-3867\nhttps://access.redhat.com/security/cve/CVE-2020-3868\nhttps://access.redhat.com/security/cve/CVE-2020-3885\nhttps://access.redhat.com/security/cve/CVE-2020-3894\nhttps://access.redhat.com/security/cve/CVE-2020-3895\nhttps://access.redhat.com/security/cve/CVE-2020-3897\nhttps://access.redhat.com/security/cve/CVE-2020-3899\nhttps://access.redhat.com/security/cve/CVE-2020-3900\nhttps://access.redhat.com/security/cve/CVE-2020-3901\nhttps://access.redhat.com/security/cve/CVE-2020-3902\nhttps://access.redhat.com/security/cve/CVE-2020-8927\nhttps://access.redhat.com/security/cve/CVE-2020-9802\nhttps://access.redhat.com/security/cve/CVE-2020-9803\nhttps://access.redhat.com/security/cve/CVE-2020-9805\nhttps://access.redhat.com/security/cve/CVE-2020-9806\nhttps://access.redhat.com/security/cve/CVE-2020-9807\nhttps://access.redhat.com/security/cve/CVE-2020-9843\nhttps://access.redhat.com/security/cve/CVE-2020-9850\nhttps://access.redhat.com/security/cve/CVE-2020-9862\nhttps://access.redhat.com/security/cve/CVE-2020-9893\nhttps://access.redhat.com/security/cve/CVE-2020-9894\nhttps://access.redhat.com/security/cve/CVE-2020-9895\nhttps://access.redhat.com/security/cve/CVE-2020-9915\nhttps://access.redhat.com/security/cve/CVE-2020-9925\nhttps://access.redhat.com/security/cve/CVE-2020-9952\nhttps://access.redhat.com/security/cve/CVE-2020-10018\nhttps://access.redhat.com/security/cve/CVE-2020-11793\nhttps://access.redhat.com/security/cve/CVE-2020-13434\nhttps://access.redhat.com/security/cve/CVE-2020-14391\nhttps://access.redhat.com/security/cve/CVE-2020-15358\nhttps://access.redhat.com/security/cve/CVE-2020-15503\nhttps://access.redhat.com/security/cve/CVE-2020-25660\nhttps://access.redhat.
com/security/cve/CVE-2020-25677\nhttps://access.redhat.com/security/cve/CVE-2020-27618\nhttps://access.redhat.com/security/cve/CVE-2020-27781\nhttps://access.redhat.com/security/cve/CVE-2020-29361\nhttps://access.redhat.com/security/cve/CVE-2020-29362\nhttps://access.redhat.com/security/cve/CVE-2020-29363\nhttps://access.redhat.com/security/cve/CVE-2021-3121\nhttps://access.redhat.com/security/cve/CVE-2021-3326\nhttps://access.redhat.com/security/cve/CVE-2021-3449\nhttps://access.redhat.com/security/cve/CVE-2021-3450\nhttps://access.redhat.com/security/cve/CVE-2021-3516\nhttps://access.redhat.com/security/cve/CVE-2021-3517\nhttps://access.redhat.com/security/cve/CVE-2021-3518\nhttps://access.redhat.com/security/cve/CVE-2021-3520\nhttps://access.redhat.com/security/cve/CVE-2021-3521\nhttps://access.redhat.com/security/cve/CVE-2021-3537\nhttps://access.redhat.com/security/cve/CVE-2021-3541\nhttps://access.redhat.com/security/cve/CVE-2021-3733\nhttps://access.redhat.com/security/cve/CVE-2021-3749\nhttps://access.redhat.com/security/cve/CVE-2021-20305\nhttps://access.redhat.com/security/cve/CVE-2021-21684\nhttps://access.redhat.com/security/cve/CVE-2021-22946\nhttps://access.redhat.com/security/cve/CVE-2021-22947\nhttps://access.redhat.com/security/cve/CVE-2021-25215\nhttps://access.redhat.com/security/cve/CVE-2021-27218\nhttps://access.redhat.com/security/cve/CVE-2021-30666\nhttps://access.redhat.com/security/cve/CVE-2021-30761\nhttps://access.redhat.com/security/cve/CVE-2021-30762\nhttps://access.redhat.com/security/cve/CVE-2021-33928\nhttps://access.redhat.com/security/cve/CVE-2021-33929\nhttps://access.redhat.com/security/cve/CVE-2021-33930\nhttps://access.redhat.com/security/cve/CVE-2021-33938\nhttps://access.redhat.com/security/cve/CVE-2021-36222\nhttps://access.redhat.com/security/cve/CVE-2021-37750\nhttps://access.redhat.com/security/cve/CVE-2021-39226\nhttps://access.redhat.com/security/cve/CVE-2021-41190\nhttps://access.redhat.com/security/cve/CVE-2021-43813\n
https://access.redhat.com/security/cve/CVE-2021-44716
https://access.redhat.com/security/cve/CVE-2021-44717
https://access.redhat.com/security/cve/CVE-2022-0532
https://access.redhat.com/security/cve/CVE-2022-21673
https://access.redhat.com/security/cve/CVE-2022-24407
https://access.redhat.com/security/updates/classification/#moderate

6. Contact:

The Red Hat security contact is <secalert@redhat.com>. More contact
details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYipqONzjgjWX9erEAQjQcBAAgWTjA6Q2NgqfVf63ZpJF1jPurZLPqxDL
0in/5+/wqWaiQ6yk7wM3YBZgviyKnAMCVdrLsaR7R77BvfJcTE3W/fzogxpp6Rne
eGT1PTgQRecrSIn+WG4gGSteavTULWOIoPvUiNpiy3Y7fFgjFdah+Nyx3Xd+xehM
CEswylOd6Hr03KZ1tS3XL3kGL2botha48Yls7FzDFbNcy6TBAuycmQZifKu8mHaF
aDAupVJinDnnVgACeS6CnZTAD+Vrx5W7NIisteXv4x5Hy+jBIUHr8Yge3oxYoFnC
Y/XmuOw2KilLZuqFe+KHig45qT+FmNU8E1egcGpNWvmS8hGZfiG1jEQAqDPbZHxp
sQAQZLQyz3TvXa29vp4QcsUuMxndIOi+QaK75JmqE06MqMIlFDYpr6eQOIgIZvFO
RDZU/qvBjh56ypInoqInBf8KOQMy6eO+r6nFbMGcAfucXmz0EVcSP1oFHAoA1nWN
rs1Qz/SO4CvdPERxcr1MLuBLggZ6iqGmHKk5IN0SwcndBHaVJ3j/LBv9m7wBYVry
bSvojBDYx5ricbTwB5sGzu7oH5yVl813FA9cjkFpEhBiMtTfI+DKC8ssoRYNHd5Z
7gLW6KWPUIDuCIiiioPZAJMyvJ0IMrNDoQ0lhqPeV7PFdlRhT95M/DagUZOpPVuT
b5PUYUBIZLc=
=GUDA
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
. Summary:

The Migration Toolkit for Containers (MTC) 1.6.0 is now available. Description:

The Migration Toolkit for Containers (MTC) enables you to migrate
Kubernetes resources, persistent volume data, and internal container images
between OpenShift Container Platform clusters, using the MTC web console or
the Kubernetes API. Solution:

For details on how to install and use MTC, refer to:

https://docs.openshift.com/container-platform/4.8/migration_toolkit_for_containers/installing-mtc.html

4.
Bugs fixed (https://bugzilla.redhat.com/):\n\n1878824 - Web console is not accessible when deployed on OpenShift cluster on IBM Cloud\n1887526 - \"Stage\" pods fail when migrating from classic OpenShift source cluster on IBM Cloud with block storage\n1899562 - MigMigration custom resource does not display an error message when a migration fails because of volume mount error\n1936886 - Service account token of existing remote cluster cannot be updated by using the web console\n1936894 - \"Ready\" status of MigHook and MigPlan custom resources is not synchronized automatically\n1949117 - \"Migration plan resources\" page displays a permanent error message when a migration plan is deleted from the backend\n1951869 - MigPlan custom resource does not detect invalid source cluster reference\n1968621 - Paused deployment config causes a migration to hang\n1970338 - Parallel migrations fail because the initial backup is missing\n1974737 - Migration plan name length in the \"Migration plan\" wizard is not validated\n1975369 - \"Debug view\" link text on \"Migration plans\" page can be improved\n1975372 - Destination namespace in MigPlan custom resource is not validated\n1976895 - Namespace mapping cannot be changed using the Migration Plan wizard\n1981810 - \"Excluded\" resources are not excluded from the migration\n1982026 - Direct image migration fails if the source URI contains a double slash (\"//\")\n1994985 - Web console crashes when a MigPlan custom resource is created with an empty namespaces list\n1996169 - When \"None\" is selected as the target storage class in the web console, the setting is ignored and the default storage class is used\n1996627 - MigPlan custom resource displays a \"PvUsageAnalysisFailed\" warning after a successful PVC migration\n1996784 - \"Migration resources\" tree on the \"Migration details\" page is not displayed\n1996902 - \"Select all\" checkbox on the \"Namespaces\" page of the \"Migration plan\" wizard remains selected after a 
namespace is unselected\n1996904 - \"Migration\" dialogs on the \"Migration plans\" page display inconsistent capitalization\n1996906 - \"Migration details\" page link is displayed for a migration plan with no associated migrations\n1996938 - Search function on \"Migration plans\" page displays no results\n1997051 - Indirect migration from MTC 1.5.1 to 1.6.0 fails during \"StageBackup\" phase\n1997127 - Direct volume migration \"retry\" feature does not work correctly after a network failure\n1997173 - Migration of custom resource definitions to OpenShift Container Platform 4.9 fails because of API version incompatibility\n1997180 - \"migration-log-reader\" pod does not log invalid Rsync options\n1997665 - Selected PVCs in the \"State migration\" dialog are reset because of background polling\n1997694 - \"Update operator\" link on the \"Clusters\" page is incorrect\n1997827 - \"Migration plan\" wizard displays PVC names incorrectly formatted after running state migration\n1998062 - Rsync pod uses upstream image\n1998283 - \"Migration step details\" link on the \"Migrations\" page does not work\n1998550 - \"Migration plan\" wizard does not support certain screen resolutions\n1998581 - \"Migration details\" link on \"Migration plans\" page displays \"latestIsFailed\" error\n1999113 - \"oc describe\" and \"oc log\" commands on \"Migration resources\" tree cannot be copied after failed migration\n1999381 - MigPlan custom resource displays \"Stage completed with warnings\" status after successful migration\n1999528 - Position of the \"Add migration plan\" button is different from the other \"Add\" buttons\n1999765 - \"Migrate\" button on \"State migration\" dialog is enabled when no PVCs are selected\n1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function\n2000205 - \"Options\" menu on the \"Migration details\" page displays incorrect items\n2000218 - Validation incorrectly blocks namespace mapping if a source cluster namespace is 
the same as the destination namespace\n2000243 - \"Migration plan\" wizard does not allow a migration within the same cluster\n2000644 - Invalid migration plan causes \"controller\" pod to crash\n2000875 - State migration status on \"Migrations\" page displays \"Stage succeeded\" message\n2000979 - \"clusterIPs\" parameter of \"service\" object can cause Velero errors\n2001089 - Direct volume migration fails because of missing CA path configuration\n2001173 - Migration plan requires two clusters\n2001786 - Migration fails during \"Stage Backup\" step because volume path on host not found\n2001829 - Migration does not complete when the namespace contains a cron job with a PVC\n2001941 - Fixing PVC conflicts in state migration plan using the web console causes the migration to run twice\n2002420 - \"Stage\" pod not created for completed application pod, causing the \"mig-controller\" to stall\n2002608 - Migration of unmounted PVC fails during \"StageBackup\" phase\n2002897 - Rollback migration does not complete when the namespace contains a cron job\n2003603 - \"View logs\" dialog displays the \"--selector\" option, which does not print all logs\n2004601 - Migration plan status on \"Migration plans\" page is \"Ready\" after migration completed with warnings\n2004923 - Web console displays \"New operator version available\" notification for incorrect operator\n2005143 - Combining Rsync and Stunnel in a single pod can degrade performance\n2006316 - Web console cannot create migration plan in a proxy environment\n2007175 - Web console cannot be launched in a proxy environment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nMIG-785 - Search for \"Crane\" in the Operator Hub should display the Migration Toolkit for Containers\n\n6. 
Description:

The release of RHACS 3.67 provides the following new features, bug fixes,
security patches and system changes:

OpenShift Dedicated support

RHACS 3.67 is thoroughly tested and supported on OpenShift Dedicated on
Amazon Web Services and Google Cloud Platform.

Use OpenShift OAuth server as an identity provider

If you are using RHACS with OpenShift, you can now configure the built-in
OpenShift OAuth server as an identity provider for RHACS.

Enhancements for CI outputs

Red Hat has improved the usability of RHACS CI integrations. CI outputs now
show additional detailed information about the vulnerabilities and the
security policies responsible for broken builds.

Runtime Class policy criteria

Users can now use RHACS to define the container runtime configuration that
may be used to run a pod's containers, using the Runtime Class policy
criteria.

Bug Fixes

The release of RHACS 3.67 includes the following bug fixes:

1. Previously, when using RHACS with the Compliance Operator integration,
RHACS did not respect or populate Compliance Operator TailoredProfiles.
This has been fixed.

2. Previously, the Alpine Linux package manager (APK) in Image policy
looked for the presence of the apk package in the image rather than the
apk-tools package. This issue has been fixed.

System changes

The release of RHACS 3.67 includes the following system changes:

* Scanner now identifies vulnerabilities in Ubuntu 21.10 images.
* The Port exposure method policy criteria now include route as an
exposure method.
* The OpenShift: Kubeadmin Secret Accessed security policy now allows the
OpenShift Compliance Operator to check for the existence of the Kubeadmin
secret without creating a violation.
* The OpenShift Compliance Operator integration now supports using
TailoredProfiles.
* The RHACS Jenkins plugin now provides additional security information.
* When you enable the environment variable ROX_NETWORK_ACCESS_LOG for
Central, the logs contain the Request URI and X-Forwarded-For header
values.
* The default uid:gid pair for the Scanner image is now 65534:65534.
* RHACS adds a new default Scope Manager role that includes minimum
permissions to create and modify access scopes.
* In addition to manually uploading vulnerability definitions in offline
mode, you can now upload definitions in online mode.
* You can now format the output of the following roxctl CLI commands in
table, csv, or JSON format: image scan, image check & deployment check.
* You can now use a regular expression for the deployment name while
specifying policy exclusions.

3. Solution:

To take advantage of these new features, fixes and changes, please upgrade
Red Hat Advanced Cluster Security for Kubernetes to version 3.67. Bugs fixed (https://bugzilla.redhat.com/):

1956818 - CVE-2021-23343 nodejs-path-parse: ReDoS via splitDeviceRe, splitTailRe and splitPathRe
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1992006 - CVE-2021-29923 golang: net: incorrect parsing of extraneous zero characters at the beginning of an IP address octet
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2005445 - CVE-2021-3801 nodejs-prismjs: ReDoS vulnerability
2006044 - CVE-2021-39293 golang: archive/zip: malformed archive may cause panic or memory exhaustion (incomplete fix of CVE-2021-33196)
2016640 - CVE-2020-27304 civetweb: directory traversal when using the built-in example HTTP form-based file upload mechanism via the mg_handle_form_request API

5. JIRA issues fixed (https://issues.jboss.org/):

RHACS-65 - Release RHACS 3.67.0

6. Clusters and applications are all visible and
managed from a single console, with security policy built in.
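The regular-expression deployment-name exclusion mentioned in the system changes above can be sketched briefly. This is a hypothetical Python illustration, not RHACS code: the function `is_excluded` and the sample patterns are invented for this sketch.

```python
import re

# Hypothetical sketch (not the RHACS implementation): treat each policy
# exclusion's deployment name as a regular expression, and exclude any
# deployment whose name fully matches one of the patterns.
def is_excluded(deployment_name: str, exclusion_patterns: list) -> bool:
    """Return True if the deployment name fully matches any exclusion pattern."""
    return any(re.fullmatch(pattern, deployment_name) is not None
               for pattern in exclusion_patterns)

# Example exclusions: skip debug tooling and numbered canary deployments.
patterns = [r"debug-.*", r"canary-\d+"]
print(is_excluded("debug-toolbox", patterns))   # True
print(is_excluded("canary-07", patterns))       # True
print(is_excluded("payments-api", patterns))    # False
```

The sketch uses `re.fullmatch` rather than `re.search` so that a pattern such as `canary-\d+` matches only whole names and cannot exclude, say, `canary-7-sidecar` by matching a prefix.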
See
the following Release Notes documentation, which will be updated shortly
for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/

Security fixes:

* CVE-2021-33623: nodejs-trim-newlines: ReDoS in .end() method
* CVE-2021-32626: redis: Lua scripts can overflow the heap-based Lua stack
* CVE-2021-32627: redis: Integer overflow issue with Streams
* CVE-2021-32628: redis: Integer overflow bug in the ziplist data structure
* CVE-2021-32672: redis: Out of bounds read in lua debugger protocol parser
* CVE-2021-32675: redis: Denial of service via Redis Standard Protocol
(RESP) request
* CVE-2021-32687: redis: Integer overflow issue with intsets
* CVE-2021-32690: helm: information disclosure vulnerability
* CVE-2021-32803: nodejs-tar: Insufficient symlink protection allowing
arbitrary file creation and overwrite
* CVE-2021-32804: nodejs-tar: Insufficient absolute path sanitization
allowing arbitrary file creation and overwrite
* CVE-2021-23017: nginx: Off-by-one in ngx_resolver_copy() when labels are
followed by a pointer to a root domain name
* CVE-2021-3711: openssl: SM2 Decryption Buffer Overflow
* CVE-2021-3712: openssl: Read buffer overruns processing ASN.1 strings
* CVE-2021-3749: nodejs-axios: Regular expression denial of service in trim
function
* CVE-2021-41099: redis: Integer overflow issue with strings

Bug fixes:

* RFE ACM Application management UI doesn't reflect object status (Bugzilla
#1965321)
* RHACM 2.4 files (Bugzilla #1983663)
* Hive Operator CrashLoopBackOff when deploying ACM with latest downstream
2.4 (Bugzilla #1993366)
* submariner-addon pod failing in RHACM 2.4 latest ds snapshot (Bugzilla
#1994668)
* ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to
multicluster pod in clb (Bugzilla #2000274)
* pre-network-manager-config failed due to timeout when static config is
used (Bugzilla #2003915)
* InfraEnv condition does not reflect the actual error message (Bugzilla
#2009204, 2010030)
* Flaky test point to a nil pointer conditions list (Bugzilla #2010175)
* InfraEnv status shows 'Failed to create image: internal error (Bugzilla
#2010272)
* subctl diagnose firewall intra-cluster - failed VXLAN checks (Bugzilla
#2013157)
* pre-network-manager-config failed due to timeout when static config is
used (Bugzilla #2014084)

3. Bugs fixed (https://bugzilla.redhat.com/):

1963121 - CVE-2021-23017 nginx: Off-by-one in ngx_resolver_copy() when labels are followed by a pointer to a root domain name
1965321 - RFE ACM Application management UI doesn't reflect object status
1966615 - CVE-2021-33623 nodejs-trim-newlines: ReDoS in .end() method
1978144 - CVE-2021-32690 helm: information disclosure vulnerability
1983663 - RHACM 2.4.0 images
1990409 - CVE-2021-32804 nodejs-tar: Insufficient absolute path sanitization allowing arbitrary file creation and overwrite
1990415 - CVE-2021-32803 nodejs-tar: Insufficient symlink protection allowing arbitrary file creation and overwrite
1993366 - Hive Operator CrashLoopBackOff when deploying ACM with latest downstream 2.4
1994668 - submariner-addon pod failing in RHACM 2.4 latest ds snapshot
1995623 - CVE-2021-3711 openssl: SM2 Decryption Buffer Overflow
1995634 - CVE-2021-3712 openssl: Read buffer overruns processing ASN.1 strings
1999784 - CVE-2021-3749 nodejs-axios: Regular expression denial of service in trim function
2000274 - ACM 2.4 install on OCP 4.9 ipv6 disconnected hub fails due to multicluster pod in clb
2003915 - pre-network-manager-config failed due to timeout when static config is used
2009204 - InfraEnv condition does not reflect the actual error message
2010030 - InfraEnv condition does not reflect the actual error message
2010175 - Flaky test point to a nil pointer conditions list
2010272 - InfraEnv
status shows \u0027Failed to create image: internal error\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n2013157 - subctl diagnose firewall intra-cluster - failed VXLAN checks\n2014084 - pre-network-manager-config failed due to timeout when static config is used\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "PACKETSTORM", "id": "166643" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164342" }, { "db": "PACKETSTORM", "id": "165129" }, { "db": "PACKETSTORM", "id": "164948" } ], "trust": 2.7 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-3749", "trust": 3.8 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2021-011290", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "166643", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164342", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2021041363", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1025", "trust": 0.6 }, { 
"db": "AUSCERT", "id": "ESB-2021.4059", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1504", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3247", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2021.3878", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021093012", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021120334", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202108-2780", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2021-3749", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166279", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164948", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "PACKETSTORM", "id": "166643" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164342" }, { "db": "PACKETSTORM", "id": "165129" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "id": "VAR-202108-1941", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-02-13T00:41:42.791000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Security\u00a0fix\u00a0for\u00a0ReDoS\u00a0(#3980)", "trust": 0.8, "url": "https://github.com/axios/axios/commit/5b457116e31db0e88fede6c428e969e87f290929" }, { "title": "Axios Security vulnerabilities", "trust": 0.6, "url": 
"http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=161088" }, { "title": "Red Hat: Important: Red Hat OpenShift Service Mesh 2.0.9 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221276 - security advisory" }, { "title": "Red Hat: Moderate: OpenShift Container Platform 4.10.3 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220056 - security advisory" }, { "title": "node-red-contrib-graphql", "trust": 0.1, "url": "https://github.com/rgstephens/node-red-contrib-graphql " }, { "title": "Axios Regular Expression Denial Of Service Attack", "trust": 0.1, "url": "https://github.com/t-guerrero/axios-redos " }, { "title": "https://github.com/broxus/ton-wallet-crystal-browser-extension", "trust": 0.1, "url": "https://github.com/broxus/ton-wallet-crystal-browser-extension " }, { "title": "geidai-ikoi (\u85dd\u5927\u30aa\u30f3\u30e9\u30a4\u30f3\u61a9\u3044)", "trust": 0.1, "url": "https://github.com/maysomusician/geidai-ikoi " }, { "title": "Seal Security Patches", "trust": 0.1, "url": "https://github.com/seal-community/patches " }, { "title": "PoC in GitHub", "trust": 0.1, "url": "https://github.com/manas3c/cve-poc " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-1333", "trust": 1.0 }, { "problemtype": "Resource exhaustion (CWE-400) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, 
"sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.7, "url": "https://github.com/axios/axios/commit/5b457116e31db0e88fede6c428e969e87f290929" }, { "trust": 1.7, "url": "https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.6, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3749" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r7324ecc35b8027a51cb6ed629490fcd3b2d7cf01c424746ed5744bf1%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/rfc5c478053ff808671aef170f3d9fc9d05cc1fab8fb64431edc66103%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r216f0fd0a3833856d6a6a1fada488cadba45f447d87010024328ccf2%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r3ae6d2654f92c5851bdb73b35e96b0e4e3da39f28ac7a1b15ae3aab8%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/ra15d63c54dc6474b29f72ae4324bcb03038758545b3ab800845de7a1%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r74d0b359408fff31f87445261f0ee13bdfcac7d66f6b8e846face321%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/rc263bfc5b53afcb7e849605478d73f5556eb0c00d1f912084e407289%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r4bf1b32983f50be00f9752214c1b53738b621be1c2b0dbd68c7f2391%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://lists.apache.org/thread.html/r075d464dce95cd13c03ff9384658edcccd5ab2983b82bfc72b62bb10%40%3ccommits.druid.apache.org%3e" }, { "trust": 1.1, "url": 
"https://lists.apache.org/thread.html/rfa094029c959da0f7c8cd7dc9c4e59d21b03457bf0cedf6c93e1bb0a%40%3cdev.druid.apache.org%3e" }, { "trust": 1.1, "url": "https://access.redhat.com/security/cve/cve-2021-3749" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu99475301/index.html" }, { "trust": 0.8, "url": "https://huntr.dev/bounties/1e8f07fc-c384-4ff9-8498-0690de2e8c31/" }, { "trust": 0.8, "url": "https://lists.apache.org/thread/3ss0n5d2mf2k9rvjywnbmmzrjlo6fhyr" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/rc263bfc5b53afcb7e849605478d73f5556eb0c00d1f912084e407289@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/ra15d63c54dc6474b29f72ae4324bcb03038758545b3ab800845de7a1@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/rfa094029c959da0f7c8cd7dc9c4e59d21b03457bf0cedf6c93e1bb0a@%3cdev.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r7324ecc35b8027a51cb6ed629490fcd3b2d7cf01c424746ed5744bf1@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r74d0b359408fff31f87445261f0ee13bdfcac7d66f6b8e846face321@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r4bf1b32983f50be00f9752214c1b53738b621be1c2b0dbd68c7f2391@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r3ae6d2654f92c5851bdb73b35e96b0e4e3da39f28ac7a1b15ae3aab8@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/r075d464dce95cd13c03ff9384658edcccd5ab2983b82bfc72b62bb10@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": 
"https://lists.apache.org/thread.html/r216f0fd0a3833856d6a6a1fada488cadba45f447d87010024328ccf2@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://lists.apache.org/thread.html/rfc5c478053ff808671aef170f3d9fc9d05cc1fab8fb64431edc66103@%3ccommits.druid.apache.org%3e" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164342/red-hat-security-advisory-2021-3694-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166643/red-hat-security-advisory-2022-1276-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1025" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021093012" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3878" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6526104" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.4059" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6514811" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2021.3247" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021120334" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1504" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6516466" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3121" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-29923" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33938" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33930" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33928" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22947" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3733" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-33929" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22946" }, { "trust": 0.2, "url": "https://issues.jboss.org/):" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22924" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22922" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22923" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32690" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/1333.html" }, { "trust": 0.1, "url": "https://github.com/rgstephens/node-red-contrib-graphql" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21654" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43825" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2022:1276" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28852" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43826" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3121" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24726" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/service_mesh/v2x/servicemesh-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43825" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23606" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28851" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21654" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24726" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21655" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29482" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43826" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29482" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36221" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21655" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28852" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36221" }, { "trust": 
0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23606" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28851" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9802" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30762" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8625" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44716" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3450" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8812" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3899" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8819" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3867" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8720" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9893" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8782" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3902" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24407" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25215" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3900" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30761" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3537" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9805" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8820" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3449" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8813" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9850" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8710" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27781" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8769" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0055" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8764" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2020-9862" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27618" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-25013" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2014-3577" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3885" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15503" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20807" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3326" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10018" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14889" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25660" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8835" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2017-14502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8764" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8844" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.10/release_notes/ocp-4-10-release-notes.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3865" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-1730" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3864" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19906" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2021-3520" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-15358" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14391" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3541" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3862" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:0056" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3901" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39226" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3518" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8808" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13434" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3895" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44717" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-11793" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-1000858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20454" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8720" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8816" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8771" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13050" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9806" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8814" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-25013" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14889" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20843" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8743" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9915" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8815" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8813" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8625" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8783" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-9169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29362" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-9952" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2016-10228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20305" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29363" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-15903" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8766" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3868" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8846" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-3894" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25677" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-30666" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-8782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3521" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/migration_toolkit_for_con" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38201" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-38201" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3694" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23343" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-27304" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-39293" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.1, "url": 
"https://access.redhat.com/errata/rhsa-2021:4902" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23343" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27304" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3801" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33929" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32803" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33930" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32626" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4618" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-3733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36385" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32675" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32804" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36385" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41099" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32804" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0512" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32628" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33623" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23017" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33928" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-33938" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32687" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32803" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "PACKETSTORM", "id": "166643" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164342" }, { "db": "PACKETSTORM", "id": "165129" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-3749" }, { "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "db": "PACKETSTORM", "id": "166643" }, { "db": "PACKETSTORM", "id": "166279" }, { "db": "PACKETSTORM", "id": "164342" }, { "db": "PACKETSTORM", "id": "165129" }, { "db": "PACKETSTORM", "id": "164948" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "db": "NVD", "id": "CVE-2021-3749" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-08-31T00:00:00", "db": "VULMON", "id": "CVE-2021-3749" }, { "date": "2022-07-26T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "date": "2022-04-08T15:05:23", "db": "PACKETSTORM", "id": "166643" }, { "date": "2022-03-11T16:38:38", "db": "PACKETSTORM", "id": "166279" }, { "date": "2021-09-30T16:27:16", "db": "PACKETSTORM", "id": "164342" }, { "date": "2021-12-02T16:06:16", "db": "PACKETSTORM", "id": "165129" }, { "date": "2021-11-12T17:01:04", "db": "PACKETSTORM", "id": "164948" }, { "date": "2021-04-13T00:00:00", "db": "CNNVD", 
"id": "CNNVD-202104-975" }, { "date": "2021-08-31T00:00:00", "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "date": "2021-08-31T11:15:07.890000", "db": "NVD", "id": "CVE-2021-3749" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-07T00:00:00", "db": "VULMON", "id": "CVE-2021-3749" }, { "date": "2022-09-20T05:49:00", "db": "JVNDB", "id": "JVNDB-2021-011290" }, { "date": "2021-04-14T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-975" }, { "date": "2022-09-19T00:00:00", "db": "CNNVD", "id": "CNNVD-202108-2780" }, { "date": "2023-11-07T03:38:14.020000", "db": "NVD", "id": "CVE-2021-3749" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "165129" }, { "db": "CNNVD", "id": "CNNVD-202108-2780" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "axios\u00a0 Resource exhaustion vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-011290" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-975" } ], "trust": 0.6 } }
var-202203-0664
Vulnerability from variot
BIND 9.11.0 -> 9.11.36, 9.12.0 -> 9.16.26, 9.17.0 -> 9.18.0
BIND Supported Preview Editions: 9.11.4-S1 -> 9.11.36-S1, 9.16.8-S1 -> 9.16.26-S1
Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records, leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by named if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220) By flooding the target resolver with queries exploiting this flaw, an attacker can significantly impair the resolver's performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795) By spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38177) By spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger the same kind of small memory leak, again gradually eroding available memory until named crashes for lack of resources. (CVE-2022-38178). ==========================================================================
Ubuntu Security Notice USN-5332-1
March 17, 2022
bind9 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in Bind.
Software Description: - bind9: Internet Domain Name Server
Details:
Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. (CVE-2021-25220)
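The vulnerable setup behind CVE-2021-25220 is an ordinary forwarding configuration. A minimal hypothetical named.conf fragment of that kind looks like the following sketch (the zone name and forwarder addresses are placeholders, not taken from the advisory):

```
// Hypothetical forwarding zone of the kind affected by CVE-2021-25220.
// Bogus NS records returned by these forwarders could be cached and
// later used by named if it needs to recurse for any reason.
zone "example.internal" {
    type forward;
    forward only;
    forwarders { 192.0.2.53; 198.51.100.53; };
};
```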
It was discovered that Bind incorrectly handled certain crafted TCP streams. A remote attacker could possibly use this issue to cause Bind to consume resources, leading to a denial of service. This issue only affected Ubuntu 21.10. (CVE-2022-0396)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.10: bind9 1:9.16.15-1ubuntu1.2
Ubuntu 20.04 LTS: bind9 1:9.16.1-0ubuntu2.10
Ubuntu 18.04 LTS: bind9 1:9.11.3+dfsg-1ubuntu1.17
In general, a standard system update will make all the necessary changes.
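The fixed-version check implied by these instructions is a Debian package version comparison (epoch:upstream-revision). The sketch below is a simplified illustration, not dpkg's full algorithm (it ignores '~' pre-release ordering and dpkg's exact character table), and the helper name `dpkg_cmp` is my own:

```python
import re

def _chunks(s):
    # Alternate runs of digits and non-digits; digit runs compare numerically.
    return [int(c) if c.isdigit() else c for c in re.findall(r"\d+|\D+", s)]

def _cmp_seq(a, b):
    ca, cb = _chunks(a), _chunks(b)
    for x, y in zip(ca, cb):
        if x != y:
            if type(x) is not type(y):
                # Simplification: fall back to plain string order.
                x, y = str(x), str(y)
            return -1 if x < y else 1
    return (len(ca) > len(cb)) - (len(ca) < len(cb))

def dpkg_cmp(a, b):
    """Compare two Debian versions 'epoch:upstream-revision'.

    Returns -1, 0, or 1. Simplified sketch: no '~' handling.
    """
    def parts(v):
        epoch, sep, rest = v.partition(":")
        if not sep:
            epoch, rest = "0", v
        upstream, sep, rev = rest.rpartition("-")
        if not sep:
            upstream, rev = rest, ""
        return int(epoch), upstream, rev

    ea, ua, ra = parts(a)
    eb, ub, rb = parts(b)
    if ea != eb:
        return -1 if ea < eb else 1
    return _cmp_seq(ua, ub) or _cmp_seq(ra, rb)
```

A host is patched if its installed version compares greater than or equal to the fixed version listed above, e.g. `dpkg_cmp(installed, "1:9.16.1-0ubuntu2.10") >= 0` on Ubuntu 20.04 LTS.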
For the oldstable distribution (buster), this problem has been fixed in version 1:9.11.5.P4+dfsg-5.1+deb10u7.
For the stable distribution (bullseye), this problem has been fixed in version 1:9.16.27-1~deb11u1.
We recommend that you upgrade your bind9 packages.
For the detailed security status of bind9 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/bind9
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8 Tjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9 IgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW iRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G ELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV G/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl MuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8 glQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU rbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W bYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4 +8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8 QygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ= =6st2 -----END PGP SIGNATURE----- . 8) - aarch64, ppc64le, s390x, x86_64
-
8) - aarch64, noarch, ppc64le, s390x, x86_64
-
Gentoo Linux Security Advisory GLSA 202210-25
https://security.gentoo.org/
Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25
Synopsis
Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1  net-dns/bind         < 9.16.33                  >= 9.16.33
2  net-dns/bind-tools   < 9.16.33                  >= 9.16.33
Description
Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC BIND users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"
All ISC BIND-tools users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"
References
[ 1 ] CVE-2021-25219
      https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220
      https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396
      https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795
      https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881
      https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906
      https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080
      https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177
      https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178
      https://nvd.nist.gov/vuln/detail/CVE-2022-38178
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-25
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:          Moderate: bind security update
Advisory ID:       RHSA-2023:0402-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2023:0402
Issue date:        2023-01-24
CVE Names:         CVE-2021-25220 CVE-2022-2795
====================================================================

1. Summary:
An update for bind is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
2. Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Client Optional (v. 7) - x86_64
Red Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64
Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64
Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64
Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
3. Description:
The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.
Security Fix(es):
* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)

* bind: processing large delegations may severely degrade resolver performance (CVE-2022-2795)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
4. Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
After installing the update, the BIND daemon (named) will be restarted automatically.
5. Bugs fixed (https://bugzilla.redhat.com/):
2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability
2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance
6. Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
ppc64: bind-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-9.11.4-26.P2.el7_9.13.s390x.rpm bind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm bind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm
ppc64le: bind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm
s390x: bind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: bind-9.11.4-26.P2.el7_9.13.src.rpm
noarch: bind-license-9.11.4-26.P2.el7_9.13.noarch.rpm
x86_64: bind-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm bind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: bind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm bind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm bind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm bind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
7. References:
https://access.redhat.com/security/cve/CVE-2021-25220
https://access.redhat.com/security/cve/CVE-2022-2795
https://access.redhat.com/security/updates/classification/#moderate
8. Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2023 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3 iaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp U2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a 8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj MUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns gE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0 wJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb PC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd zTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP VVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim NG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33 eDGIrZR4jEY=azJw -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:
The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. The dhcp packages provide a relay agent and ISC DHCP service required to enable and administer DHCP on a network.
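The configuration items described above (an IP address pool, a subnet mask, and a broadcast address) map directly onto a dhcpd.conf subnet declaration. A minimal hypothetical sketch follows; all addresses are documentation-range placeholders, not values from the advisory:

```
# Hypothetical dhcpd.conf fragment: one subnet handing out addresses
# together with a subnet mask, broadcast address, and default router.
subnet 192.0.2.0 netmask 255.255.255.0 {
    range 192.0.2.100 192.0.2.200;
    option subnet-mask 255.255.255.0;
    option broadcast-address 192.0.2.255;
    option routers 192.0.2.1;
}
```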
The following advisory data is extracted from:
https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json
Red Hat officially shut down their mailing list notifications October 10, 2023. Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that, due to an inability to easily track revision updates without crawling Red Hat's archive, these advisories are single notifications, and we strongly suggest that you visit the Red Hat-provided links to ensure you have the latest information available if the subject matter listed pertains to your environment.
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0664", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.16.8" }, { "model": "bind", "scope": "lt", "trust": 1.0, "vendor": "isc", "version": "9.16.27" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { 
"model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "19.4" }, { "model": "h700e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "20.3" }, { "model": "h300e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "19.3" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "21.2" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "20.2" }, { "model": "bind", "scope": "lte", "trust": 1.0, "vendor": "isc", "version": "9.18.0" }, { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.11.0" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "junos", "scope": "lt", "trust": 1.0, "vendor": "juniper", "version": "19.3" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "21.4" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "21.1" }, { "model": "bind", "scope": "lt", "trust": 1.0, "vendor": "isc", "version": "9.11.37" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "22.1" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.12.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "21.3" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "22.2" }, { "model": 
"h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.17.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.11.4" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "junos", "scope": "eq", "trust": 1.0, "vendor": "juniper", "version": "20.4" }, { "model": "bind", "scope": null, "trust": 0.8, "vendor": "isc", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "esmpro/serveragent", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "NVD", "id": "CVE-2021-25220" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:supported_preview:*:*:*", "cpe_name": [], "versionEndExcluding": "9.16.27", "versionStartIncluding": "9.16.8", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:supported_preview:*:*:*", "cpe_name": [], "versionEndExcluding": "9.11.37", "versionStartIncluding": "9.11.4", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "9.16.27", "versionStartIncluding": "9.12.0", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "9.11.37", "versionStartIncluding": "9.11.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "9.18.0", "versionStartIncluding": "9.17.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { 
"cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, 
{ "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:juniper:junos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "19.3", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s7:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s5:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s6:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s5:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r3-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.3:r2-s6:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { 
"cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s6:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s5:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s5:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s8:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s6:*:*:*:*:*:*", 
"cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r3-s7:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:19.4:r2-s7:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r2-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r1-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.2:r3-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:o:juniper:junos:20.3:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.3:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:20.4:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, 
{ "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.1:r3-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r3-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.2:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1-s1:*:*:*:*:*:*", "cpe_name": [], 
"vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.3:r2-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1-s2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:21.4:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:22.1:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:22.1:r1-s1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:juniper:junos:22.2:r1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:juniper:srx100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx110:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx1400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx1500:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx210:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx220:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": 
"cpe:2.3:h:juniper:srx240:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx240h2:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx240m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx300:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx320:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx340:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx3400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx345:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx3600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx380:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx4000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx4100:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx4200:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx4600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx5000:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx5400:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx550:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx550_hm:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx550m:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx5600:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": 
"cpe:2.3:h:juniper:srx5800:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false }, { "cpe23Uri": "cpe:2.3:h:juniper:srx650:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-25220" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202203-1514" } ], "trust": 0.6 }, "cve": "CVE-2021-25220", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.0, "confidentialityImpact": "NONE", "exploitabilityScore": 8.0, "impactScore": 2.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", 
"baseScore": 5.0, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-25220", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.8, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "SINGLE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 4.0, "confidentialityImpact": "NONE", "exploitabilityScore": 8.0, "id": "CVE-2021-25220", "impactScore": 2.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:S/C:N/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.8, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.3, "impactScore": 4.0, "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "CHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:N/I:H/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 8.6, "baseSeverity": "High", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2021-25220", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Changed", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:H/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-25220", "trust": 1.0, "value": "MEDIUM" }, { "author": "security-officer@isc.org", "id": "CVE-2021-25220", "trust": 1.0, "value": "MEDIUM" }, { 
"author": "NVD", "id": "CVE-2021-25220", "trust": 0.8, "value": "High" }, { "author": "CNNVD", "id": "CNNVD-202203-1514", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-25220", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "db": "NVD", "id": "CVE-2021-25220" }, { "db": "NVD", "id": "CVE-2021-25220" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "BIND 9.11.0 -\u003e 9.11.36 9.12.0 -\u003e 9.16.26 9.17.0 -\u003e 9.18.0 BIND Supported Preview Editions: 9.11.4-S1 -\u003e 9.11.36-S1 9.16.8-S1 -\u003e 9.16.26-S1 Versions of BIND 9 earlier than those shown - back to 9.1.0, including Supported Preview Editions - are also believed to be affected but have not been tested as they are EOL. The cache could become poisoned with incorrect records leading to queries being made to the wrong servers, which might also result in false information being returned to clients. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. (CVE-2021-25220)\nBy flooding the target resolver with queries exploiting this flaw an attacker can significantly impair the resolver\u0027s performance, effectively denying legitimate clients access to the DNS resolution service. (CVE-2022-2795)\nBy spoofing the target resolver with responses that have a malformed ECDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. 
(CVE-2022-38177)\nBy spoofing the target resolver with responses that have a malformed EdDSA signature, an attacker can trigger a small memory leak. It is possible to gradually erode available memory to the point where named crashes for lack of resources. (CVE-2022-38178). ==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. \n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. \n(CVE-2021-25220)\n\nIt was discovered that Bind incorrectly handled certain crafted TCP\nstreams. A remote attacker could possibly use this issue to cause Bind to\nconsume resources, leading to a denial of service. This issue only affected\nUbuntu 21.10. (CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n bind9 1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n bind9 1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n bind9 1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1:9.11.5.P4+dfsg-5.1+deb10u7. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:9.16.27-1~deb11u1. \n\nWe recommend that you upgrade your bind9 packages. 
\n\nFor the detailed security status of bind9 please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/bind9\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8\nTjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9\nIgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW\niRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G\nELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV\nG/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl\nMuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8\nglQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU\nrbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W\nbYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4\n+8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8\nQygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ=\n=6st2\n-----END PGP SIGNATURE-----\n. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: ISC BIND: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #820563, #835439, #872206\n ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. 
\n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-dns/bind \u003c 9.16.33 \u003e= 9.16.33\n 2 net-dns/bind-tools \u003c 9.16.33 \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. 
Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: bind security update\nAdvisory ID: RHSA-2023:0402-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2023:0402\nIssue date: 2023-01-24\nCVE Names: CVE-2021-25220 CVE-2022-2795\n====================================================================\n1. Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - noarch, ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. 
BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: processing large delegations may severely degrade resolver\nperformance (CVE-2022-2795)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2128584 - CVE-2022-2795 bind: processing large delegations may severely degrade resolver performance\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 
7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 
7):\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nppc64:\nbind-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-libs-lite-9.11.4-26.P2
.el7_9.13.ppc64le.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64.rpm\n\nppc64le:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.ppc64le.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.s390x.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.
el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nbind-9.11.4-26.P2.el7_9.13.src.rpm\n\nnoarch:\nbind-license-9.11.4-26.P2.el7_9.13.noarch.rpm\n\nx86_64:\nbind-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-libs-lite-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-libs-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-utils-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nbind-debuginfo-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-debuginfo-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-export-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-lite-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.i686.rpm\nbind-pkcs11-devel-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-9.11.4-26.P2.el7_9.13.x86_64.rpm\nbind-sdb-chroot-9.11.4-26.P2.el7_9.13.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-2795\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2023 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY9AIs9zjgjWX9erEAQiz9BAAiQvmAQ5DWdOQbHHizPAHBnKnBtNBfCT3\niaAzKQ0Yrpk26N9cdrvcBJwdrHpI28VJ3eemFUxQFseUqtAErsgfL4QqnjPjQgsp\nU2qLPjqbzfOrbi1CuruMMIIbtxfwvsdic8OB9Zi7XzfZjWm2X4c6Ima+QXol6x9a\n8J2qdzCqhoYUXJgdpVK9nAAGsPtidcnqLYYIcTclJArp6uRSlEEk7EbNJvs2SAbj\nMUo5aq5BoVy2TkiMyqhT5voy6K8f4c7WbQYerNieps18541ZSr29fAzWBznr3Yns\ngE10Aaoa8uCxlaexFR8EahPVYe6wJAm6R62LBabEWChbzW0oxr7X2DdzX9eiOwl0\nwJT0n4GHoFsCGMa+v1yybkjHIUfiW25WC7bC4QDj4fjTpbicVlnttXhQJwCJK5bb\nPC27GE6qi7EqwHYJa/jPenbIG38mXj/r2bwIr1qYQMLjQ8BQIneShky3ZWE4l/jd\nzTMwGVal8ACBYdCALx/O9QNyzaO92xHLnKl3DIoqaQdjasIfGp/G6Xc1YggKyZAP\nVVtXPiOIbReBVNWiBXMH1ZEQeNon4su0/MbMWrmJpwvEzYeXkuWO98LZ4dlLVuim\nNG/dJ6RqzT6/aqRNVyOt5s4SLIQ5DrPXoPnZRUBsbpWhP6lxPhESKA0TUg5FYz33\neDGIrZR4jEY=azJw\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nThe Dynamic Host Configuration Protocol (DHCP) is a protocol that allows\nindividual devices on an IP network to get their own network configuration\ninformation, including an IP address, a subnet mask, and a broadcast\naddress. The dhcp packages provide a relay agent and ISC DHCP service\nrequired to enable and administer DHCP on a network. \n\nThe following advisory data is extracted from:\n\nhttps://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json\n\nRed Hat officially shut down their mailing list notifications October 10, 2023. 
Due to this, Packet Storm has recreated the below data as a reference point to raise awareness. It must be noted that due to an inability to easily track revision updates without crawling Red Hat\u0027s archive, these advisories are single notifications and we strongly suggest that you visit the Red Hat provided links to ensure you have the latest information available if the subject matter listed pertains to your environment", "sources": [ { "db": "NVD", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "PACKETSTORM", "id": "166356" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169745" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "170724" }, { "db": "PACKETSTORM", "id": "169846" }, { "db": "PACKETSTORM", "id": "178475" } ], "trust": 2.52 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-25220", "trust": 4.2 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU98927070", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-001797", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "166356", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169773", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169587", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170724", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169846", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.1150", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5750", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": 
"AUSCERT", "id": "ESB-2022.1223", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1289", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2694", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1183", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1160", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022032124", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031701", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031728", "trust": 0.6 }, { "db": "PACKETSTORM", "id": "169894", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202203-1514", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2021-25220", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166354", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169261", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169745", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178475", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "PACKETSTORM", "id": "166356" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169745" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "170724" }, { "db": "PACKETSTORM", "id": "169846" }, { "db": "PACKETSTORM", "id": "178475" }, { "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "db": "NVD", "id": "CVE-2021-25220" } ] }, "id": "VAR-202203-0664", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:44:12.287000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { 
"title": "NV22-009", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/" }, { "title": "Ubuntu Security Notice: USN-5332-2: Bind vulnerability", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5332-2" }, { "title": "Red Hat: Moderate: dhcp security and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228385 - security advisory" }, { "title": "Red Hat: Moderate: bind security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227790 - security advisory" }, { "title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5332-1" }, { "title": "Red Hat: Moderate: bind security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228068 - security advisory" }, { "title": "Red Hat: Moderate: bind security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230402 - security advisory" }, { "title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-25220" }, { "title": "Amazon Linux 2: ALAS2-2023-2001", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2\u0026qid=alas2-2023-2001" }, { "title": "Amazon Linux 2022: ALAS2022-2022-166", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-166" }, { "title": "Amazon 
Linux 2022: ALAS2022-2022-138", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-138" }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2021-25220 " }, { "title": "", "trust": 0.1, "url": "https://github.com/vincent-deng/veracode-container-security-finding-parser " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-444", "trust": 1.0 }, { "problemtype": "Lack of information (CWE-noinfo) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "NVD", "id": "CVE-2021-25220" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://kb.isc.org/v1/docs/cve-2021-25220" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202210-25" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20220408-0001/" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220" }, { "trust": 1.6, "url": "https://supportportal.juniper.net/s/article/2022-10-security-bulletin-junos-os-srx-series-cache-poisoning-vulnerability-in-bind-used-by-dns-proxy-cve-2021-25220?language=en_us" }, { "trust": 1.0, "url": "https://access.redhat.com/security/cve/cve-2021-25220" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/" }, { "trust": 1.0, 
"url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu98927070/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99475301/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/api7u5e7sx7baavfnw366ffjgd6nzzkv/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/5vx3i2u3icoiei5y7oya6cholfmnh3yq/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2sxt7247qtknbq67mnrgzd23adxu6e5u/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/de3uavcpumakg27zl5yxsp2c3riow3jz/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169846/red-hat-security-advisory-2022-8385-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1223" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1289" }, { "trust": 0.6, "url": 
"https://vigilance.fr/vulnerability/isc-bind-spoofing-via-dns-forwarders-cache-poisoning-37754" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031728" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166356/ubuntu-security-notice-usn-5332-2.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1150" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1183" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1160" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170724/red-hat-security-advisory-2023-0402-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2021-25220/" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5750" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031701" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2694" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022032124" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.4, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.4, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.4, "url": "https://bugzilla.redhat.com/):" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.2, "url": "https://ubuntu.com/security/notices/usn-5332-2" }, { "trust": 0.2, "url": "https://ubuntu.com/security/notices/usn-5332-1" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/444.html" }, { "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2021-25220" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2/alas-2023-2001.html" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/bind9" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7790" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0396" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7643" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0402" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2795" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8385" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2024:2720" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2128584" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263896" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263917" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2064512" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2164032" }, { "trust": 0.1, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2263914" }, { "trust": 0.1, "url": "https://access.redhat.com/security/data/csaf/v2/advisories/2024/rhsa-2024_2720.json" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "PACKETSTORM", "id": "166356" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169745" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "170724" }, { "db": "PACKETSTORM", "id": "169846" }, { "db": "PACKETSTORM", "id": "178475" }, { "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "db": 
"NVD", "id": "CVE-2021-25220" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-25220" }, { "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "db": "PACKETSTORM", "id": "166356" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169745" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "170724" }, { "db": "PACKETSTORM", "id": "169846" }, { "db": "PACKETSTORM", "id": "178475" }, { "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "db": "NVD", "id": "CVE-2021-25220" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-03-23T00:00:00", "db": "VULMON", "id": "CVE-2021-25220" }, { "date": "2022-05-12T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "date": "2022-03-17T15:54:34", "db": "PACKETSTORM", "id": "166356" }, { "date": "2022-03-17T15:54:20", "db": "PACKETSTORM", "id": "166354" }, { "date": "2022-03-28T19:12:00", "db": "PACKETSTORM", "id": "169261" }, { "date": "2022-11-08T13:44:36", "db": "PACKETSTORM", "id": "169745" }, { "date": "2022-11-08T13:49:24", "db": "PACKETSTORM", "id": "169773" }, { "date": "2022-10-31T14:50:53", "db": "PACKETSTORM", "id": "169587" }, { "date": "2023-01-25T16:07:50", "db": "PACKETSTORM", "id": "170724" }, { "date": "2022-11-15T16:40:52", "db": "PACKETSTORM", "id": "169846" }, { "date": "2024-05-09T15:16:06", "db": "PACKETSTORM", "id": "178475" }, { "date": "2022-03-09T00:00:00", "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "date": "2022-03-23T13:15:07.680000", "db": "NVD", "id": "CVE-2021-25220" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": 
"2022-11-28T00:00:00", "db": "VULMON", "id": "CVE-2021-25220" }, { "date": "2022-09-20T06:12:00", "db": "JVNDB", "id": "JVNDB-2022-001797" }, { "date": "2023-07-24T00:00:00", "db": "CNNVD", "id": "CNNVD-202203-1514" }, { "date": "2023-11-09T14:44:33.733000", "db": "NVD", "id": "CVE-2021-25220" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "166356" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "CNNVD", "id": "CNNVD-202203-1514" } ], "trust": 0.8 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "BIND\u00a0 Cache Pollution with Incorrect Records Vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001797" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "environmental issue", "sources": [ { "db": "CNNVD", "id": "CNNVD-202203-1514" } ], "trust": 0.6 } }
var-202009-0304
Vulnerability from variot
This vulnerability allows an attacker to use the internal WebSockets API for CodeMeter (all versions prior to 7.00 are affected, including version 7.0 or newer with the affected WebSockets API still enabled; this is especially relevant for systems or devices where a web browser is used to access a web server) via a specifically crafted JavaScript payload, which may allow alteration or creation of license files when combined with CVE-2020-14515. CodeMeter contains a vulnerability related to same-origin policy violations. Information may be tampered with. Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens (Germany) and an integral part of its HMI product line. The system is mainly used in industries such as rail transit, building automation, and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool.
Many Siemens products have security vulnerabilities. Attackers can use vulnerabilities to change or create license files
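The abuse path described above works because JavaScript served by any website can open a WebSocket to a service listening on the local machine; the standard defense is to validate the HTTP `Origin` header during the handshake. A minimal sketch, assuming a hypothetical trusted UI origin (the function and origin names are illustrative, not CodeMeter's actual API):

```python
# Sketch: why a localhost WebSocket service reachable from a browser must
# validate the Origin header of the upgrade request. Without this check,
# script on any attacker-controlled page can drive the local API.

ALLOWED_ORIGINS = {"https://licenses.example.local"}  # hypothetical trusted origin


def is_trusted_handshake(headers: dict) -> bool:
    """Accept a WebSocket upgrade only from an allow-listed Origin.

    `headers` maps lowercase header names to values, as parsed from the
    HTTP upgrade request. A missing Origin header is treated as untrusted,
    since browsers always send one for WebSocket handshakes.
    """
    origin = headers.get("origin")
    return origin in ALLOWED_ORIGINS
```

A server applying this check refuses handshakes initiated by pages on other origins, which closes the cross-origin abuse route even when the service itself is only bound to localhost.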
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202009-0304", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "codemeter", "scope": "lt", "trust": 1.0, "vendor": "wibu", "version": "7.00" }, { "model": "codemeter", "scope": null, "trust": 0.8, "vendor": "wibu", "version": null }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": "7.00" 
}, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sinema remote connect", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:wibu:codemeter:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "7.00", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-14519" } ] }, "cve": "CVE-2020-14519", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.0, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, 
"obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.0, "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-14519", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.8, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:N/I:P/A:N", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "CNVD", "availabilityImpact": "COMPLETE", "baseScore": 9.4, "confidentialityImpact": "NONE", "exploitabilityScore": 10.0, "id": "CNVD-2020-51241", "impactScore": 9.2, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.6, "vectorString": "AV:N/AC:L/Au:N/C:N/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2020-14519", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N", 
"version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-14519", "trust": 1.8, "value": "HIGH" }, { "author": "CNVD", "id": "CNVD-2020-51241", "trust": 0.6, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202009-486", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" }, { "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "This vulnerability allows an attacker to use the internal WebSockets API for CodeMeter (All versions prior to 7.00 are affected, including Version 7.0 or newer with the affected WebSockets API still enabled. This is especially relevant for systems or devices where a web browser is used to access a web server) via a specifically crafted Java Script payload, which may allow alteration or creation of license files for when combined with CVE-2020-14515. CodeMeter Exists in a vulnerability related to same-origin policy violations.Information may be tampered with. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. \n\r\n\r\nMany Siemens products have security vulnerabilities. 
Attackers can use vulnerabilities to change or create license files", "sources": [ { "db": "NVD", "id": "CVE-2020-14519" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "CNVD", "id": "CNVD-2020-51241" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-14519", "trust": 3.8 }, { "db": "ICS CERT", "id": "ICSA-20-203-01", "trust": 2.4 }, { "db": "JVN", "id": "JVNVU90770748", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU94568336", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-011223", "trust": 0.8 }, { "db": "SIEMENS", "id": "SSA-455843", "trust": 0.6 }, { "db": "CNVD", "id": "CNVD-2020-51241", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.3", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022021806", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202009-486", "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" }, { "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "id": "VAR-202009-0304", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" } ], "trust": 1.06346013 }, "iot_taxonomy": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "category": [ "ICS" ], "sub_category": null, "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": 
"CNVD-2020-51241" } ] }, "last_update_date": "2023-12-18T10:56:21.427000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "CodeMeter", "trust": 0.8, "url": "https://www.wibu.com/products/codemeter.html" }, { "title": "Patch for Multiple Siemens products verification error vulnerabilities", "trust": 0.6, "url": "https://www.cnvd.org.cn/patchinfo/show/233347" }, { "title": "Wibu-Systems AG CodeMeter Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=127907" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-346", "trust": 1.0 }, { "problemtype": "Same-origin policy violation (CWE-346) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14519" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu94568336/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90770748/" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf" }, { "trust": 0.6, "url": 
"https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" }, { "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "CNVD", "id": "CNVD-2020-51241" }, { "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "db": "NVD", "id": "CVE-2020-14519" }, { "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51241" }, { "date": "2021-03-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "date": "2020-09-16T20:15:13.723000", "db": "NVD", "id": "CVE-2020-14519" }, { "date": "2020-09-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51241" }, { "date": "2022-03-15T05:12:00", "db": "JVNDB", "id": "JVNDB-2020-011223" }, { "date": "2020-09-22T18:07:41.903000", "db": "NVD", "id": "CVE-2020-14519" }, { "date": "2022-02-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-486" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { 
"@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-486" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "CodeMeter\u00a0 Vulnerability regarding same-origin policy violation in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011223" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "access control error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-486" } ], "trust": 0.6 } }
var-202301-0545
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product could potentially read and write arbitrary files from and to the device's file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS contains a path traversal vulnerability; information may be disclosed or tampered with, and a denial-of-service (DoS) condition may result
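The read/write primitive described here is a classic path traversal: a client-supplied file name containing `..` segments escapes the intended directory. A minimal sketch of the usual mitigation, with a hypothetical document root (not taken from the advisory):

```python
from pathlib import Path

# Sketch of the standard path traversal defense: resolve every
# client-supplied path and verify it stays inside the document root
# before any read or write is performed.

BASE_DIR = Path("/var/ins/webroot")  # hypothetical document root


def safe_resolve(user_path: str) -> Path:
    """Resolve `user_path` inside BASE_DIR or raise ValueError.

    Resolution is non-strict, so the check also covers write targets
    that do not exist yet. Requires Python 3.9+ for is_relative_to().
    """
    candidate = (BASE_DIR / user_path).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError(f"path escapes document root: {user_path!r}")
    return candidate
```

With this check in place, a request for `../../etc/passwd` is rejected before it reaches the file system, while ordinary relative paths under the root resolve normally.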
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202301-0545", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": "1.0 sp2 update 1" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-45092" } ] }, "cve": "CVE-2022-45092", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": 
"HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 9.9, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.1, "impactScore": 6.0, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "CHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 8.8, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-45092", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "Low", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-45092", "trust": 1.8, "value": "HIGH" }, { "author": "productcert@siemens.com", "id": "CVE-2022-45092", "trust": 1.0, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202301-654", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). 
An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component. SINEC INS Exists in a past traversal vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "JVNDB", "id": "JVNDB-2023-001808" } ], "trust": 1.62 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-45092", "trust": 3.2 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 1.6 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-001808", "trust": 0.8 }, { "db": "CNNVD", "id": "CNNVD-202301-654", "trust": 0.6 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "id": "VAR-202301-0545", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T10:52:50.593000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "SSA-332410", "trust": 0.8, "url": 
"https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "title": "Siemens SINEC NMS Repair measures for path traversal vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=221640" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-22", "trust": 1.0 }, { "problemtype": "Path traversal (CWE-22) [ others ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/index.html" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-45092" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-45092/" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "db": "NVD", "id": "CVE-2022-45092" }, { "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": 
"2023-05-16T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "date": "2023-01-10T12:15:23.453000", "db": "NVD", "id": "CVE-2022-45092" }, { "date": "2023-01-10T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-05-16T03:29:00", "db": "JVNDB", "id": "JVNDB-2023-001808" }, { "date": "2023-01-14T00:47:06.117000", "db": "NVD", "id": "CVE-2022-45092" }, { "date": "2023-01-16T00:00:00", "db": "CNNVD", "id": "CNNVD-202301-654" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-654" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "SINEC\u00a0INS\u00a0 Past traversal vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-001808" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "path traversal", "sources": [ { "db": "CNNVD", "id": "CNNVD-202301-654" } ], "trust": 0.6 } }
var-202309-0672
Vulnerability from variot
A heap buffer overflow vulnerability in the Wibu CodeMeter Runtime network service, up to version 7.60b, allows an unauthenticated remote attacker to achieve remote code execution and gain full access to the host system. CodeMeter Runtime from Wibu-Systems AG, embedded in products from multiple vendors such as those listed below, contains an out-of-bounds write vulnerability; information may be disclosed or tampered with, and a denial-of-service (DoS) condition may result. PSS(R)CAPE is transmission and distribution network protection simulation software. PSS(R)E is a power system simulation and analysis tool for transmission operation and planning. PSS(R)ODMS is a CIM-based network model management tool with network analysis capabilities for planning and operational planning of transmission utilities. SIMATIC PCS neo is a distributed control system (DCS). SIMATIC WinCC Open Architecture (OA) is part of the SIMATIC HMI family. It is designed for applications requiring a high degree of customer-specific adaptability, large or complex applications, and projects that impose specific system requirements or functionality. SIMIT Simulation Platform allows simulating factory settings to predict failures at an early planning stage. SINEC INS (Infrastructure Network Services) is a web-based application that combines various network services in one tool. SINEMA Remote Connect is a management platform for remote networks that allows simple management of tunnel connections (VPN) between headquarters, service technicians and installed machines or plants.
Siemens industrial products that embed the Wibu-Systems CodeMeter component are affected by this heap buffer overflow, which is caused by a missing bounds check. An attacker could exploit the vulnerability to overflow the buffer and execute arbitrary code on the system
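The missing bounds check described above is the general pattern behind heap buffer overflows in network parsers: an attacker-controlled length field is trusted when copying a packet body into a fixed-size buffer. The sketch below models the check whose absence causes the bug; the packet layout and names are hypothetical, chosen only to illustrate the class of flaw, not CodeMeter's actual wire protocol:

```python
# Sketch: a parser for a [2-byte big-endian length][body] packet that
# validates the attacker-controlled length field before copying into a
# fixed-size buffer. In C, omitting this check is what turns the copy
# into a heap buffer overflow; Python merely models the pattern.

BUF_SIZE = 256  # fixed receive buffer, as in a typical C implementation


def parse_packet(packet: bytes) -> bytes:
    """Parse a length-prefixed packet, rejecting bogus length fields."""
    if len(packet) < 2:
        raise ValueError("truncated header")
    claimed = int.from_bytes(packet[:2], "big")
    body = packet[2:]
    if claimed > BUF_SIZE or claimed > len(body):
        raise ValueError(f"length field {claimed} exceeds buffer/body size")
    buf = bytearray(BUF_SIZE)
    buf[:claimed] = body[:claimed]  # bounded copy
    return bytes(buf[:claimed])
```

The two comparisons before the copy are the entire fix: the claimed length must fit both the destination buffer and the bytes actually received.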
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202309-0672", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "tubedesign", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "08.00" }, { "model": "activation wizard", "scope": "lte", "trust": 1.0, "vendor": "phoenixcontact", "version": "1.6" }, { "model": "fl network manager", "scope": "lte", "trust": 1.0, 
"vendor": "phoenixcontact", "version": "7.0" }, { "model": "trutops mark 3d", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "06.01" }, { "model": "trutopsprint", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "01.00" }, { "model": "trutopsboost", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "06.00.23.00" }, { "model": "trutopsfab", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "15.00.23.00" }, { "model": "tops unfold", "scope": "eq", "trust": 1.0, "vendor": "trumpf", "version": "05.03.00.00" }, { "model": "teczonebend", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "18.02.r8" }, { "model": "iol-conf", "scope": "lte", "trust": 1.0, "vendor": "phoenixcontact", "version": "1.7.0" }, { "model": "trumpflicenseexpert", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "1.11.1" }, { "model": "programmingtube", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "1.0.1" }, { "model": "programmingtube", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "4.6.3" }, { "model": "trutops mark 3d", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "01.00" }, { "model": "trutopsfab storage smallstore", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "14.06.20" }, { "model": "trutopsweld", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "9.0.28148.1" }, { "model": "trutops cell sw48", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "01.00" }, { "model": "trutopsfab storage smallstore", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "20.04.20.00" }, { "model": "module type package designer", "scope": "eq", "trust": 1.0, "vendor": "phoenixcontact", "version": "1.2.0" }, { "model": "trutopsboost", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "16.0.22" }, { "model": "e-mobility charging suite", "scope": "lte", "trust": 1.0, "vendor": "phoenixcontact", "version": "1.7.0" }, { "model": "module type 
package designer", "scope": "lt", "trust": 1.0, "vendor": "phoenixcontact", "version": "1.2.0" }, { "model": "trutopsfab", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "22.8.25" }, { "model": "trutops cell sw48", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "02.26.0" }, { "model": "trutops cell classic", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "09.09.02" }, { "model": "oseon", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "3.0.22" }, { "model": "tubedesign", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "14.06.150" }, { "model": "trutopsweld", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "7.0.198.241" }, { "model": "trumpflicenseexpert", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "1.5.2" }, { "model": "trutops", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "08.00" }, { "model": "topscalculation", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "22.00.00" }, { "model": "trutops", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "12.01.00.00" }, { "model": "trutopsprint", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "00.06.00" }, { "model": "codemeter runtime", "scope": "lt", "trust": 1.0, "vendor": "wibu", "version": "7.60c" }, { "model": "topscalculation", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "14.00" }, { "model": "trutopsprintmultilaserassistant", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "01.02" }, { "model": "plcnext engineer", "scope": "lte", "trust": 1.0, "vendor": "phoenixcontact", "version": "2023.6" }, { "model": "oseon", "scope": "gte", "trust": 1.0, "vendor": "trumpf", "version": "1.0.0" }, { "model": "teczonebend", "scope": "lte", "trust": 1.0, "vendor": "trumpf", "version": "23.06.01" }, { "model": "trutopsweld", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "programmingtube", "scope": null, "trust": 0.8, 
"vendor": "trumpf", "version": null }, { "model": "codemeter runtime", "scope": null, "trust": 0.8, "vendor": "wibu", "version": null }, { "model": "trutopsboost", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutopsprintmultilaserassistant", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutopsprint", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "oseon", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutops cell sw48", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutopsfab", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "tops unfold", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutops mark 3d", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutopsfab storage smallstore", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "tubedesign", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutops", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trumpflicenseexpert", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "topscalculation", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "teczonebend", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "trutops cell classic", "scope": null, "trust": 0.8, "vendor": "trumpf", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "simit simulation platform", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sinema remote connect", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "simatic wincc oa", "scope": "eq", "trust": 0.6, "vendor": "siemens", 
"version": "v3.17" }, { "model": "simatic wincc oa", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v3.18" }, { "model": "pss cape", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v14\u003cv14.2023-08-23" }, { "model": "pss cape", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v15\u003cv15.0.22" }, { "model": "pss e", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v34\u003cv34.9.6" }, { "model": "pss odms", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v13.0" }, { "model": "pss odms", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v13.1\u003cv13.1.12.1" }, { "model": "simatic pcs neo", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v3" }, { "model": "simatic pcs neo", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v4" }, { "model": "simatic wincc oa p006", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v3.19\u003cv3.19" }, { "model": "pss e", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "v35" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:wibu:codemeter_runtime:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "7.60c", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:trumpf:tubedesign:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "14.06.150", "versionStartIncluding": "08.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsweld:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": 
"9.0.28148.1", "versionStartIncluding": "7.0.198.241", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsprintmultilaserassistant:*:*:*:*:*:*:*:*", "cpe_name": [], "versionStartIncluding": "01.02", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsprint:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "01.00", "versionStartIncluding": "00.06.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutops_mark_3d:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "06.01", "versionStartIncluding": "01.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsfab_storage_smallstore:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "20.04.20.00", "versionStartIncluding": "14.06.20", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsfab:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "22.8.25", "versionStartIncluding": "15.00.23.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutops_cell_sw48:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "02.26.0", "versionStartIncluding": "01.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutops_cell_classic:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "09.09.02", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutopsboost:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "16.0.22", "versionStartIncluding": "06.00.23.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trutops:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "12.01.00.00", "versionStartIncluding": "08.00", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:trumpflicenseexpert:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.11.1", "versionStartIncluding": "1.5.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:topscalculation:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "22.00.00", "versionStartIncluding": "14.00", "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:trumpf:teczonebend:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "23.06.01", "versionStartIncluding": "18.02.r8", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:tops_unfold:05.03.00.00:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:programmingtube:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "4.6.3", "versionStartIncluding": "1.0.1", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:trumpf:oseon:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "3.0.22", "versionStartIncluding": "1.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:phoenixcontact:module_type_package_designer:1.2.0:beta:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:module_type_package_designer:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.2.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:activation_wizard:*:*:*:*:*:moryx:*:*", "cpe_name": [], "versionEndIncluding": "1.6", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:plcnext_engineer:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "2023.6", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:iol-conf:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:fl_network_manager:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:phoenixcontact:e-mobility_charging_suite:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.7.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-3935" } ] }, "cve": "CVE-2023-3935", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "accessComplexity": "HIGH", "accessVector": "NETWORK", "authentication": "NONE", "author": "CNVD", "availabilityImpact": "COMPLETE", "baseScore": 7.6, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 4.9, "id": "CNVD-2023-69811", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.6, "vectorString": "AV:N/AC:H/Au:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "info@cert.vde.com", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "OTHER", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "JVNDB-2023-012536", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "info@cert.vde.com", "id": "CVE-2023-3935", "trust": 1.0, "value": "CRITICAL" }, { "author": "NVD", "id": "CVE-2023-3935", "trust": 1.0, "value": "CRITICAL" }, { "author": "OTHER", 
"id": "JVNDB-2023-012536", "trust": 0.8, "value": "Critical" }, { "author": "CNVD", "id": "CNVD-2023-69811", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A heap buffer overflow vulnerability in Wibu CodeMeter Runtime network service up to version 7.60b allows an unauthenticated, remote attacker to achieve RCE and gain full access of the host system. Wibu-Systems AG of CodeMeter Runtime Products from multiple vendors, such as the following, contain out-of-bounds write vulnerabilities.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. PSS(R)CAPE is a transmission and distribution network protection simulation software. PSS(R)E is a power system simulation and analysis tool for transmission operation and planning. PSS(R)ODMS is a CIM-based network model management tool with network analysis capabilities for planning and operational planning of transmission utilities. SIMATIC PCS neo is a distributed control system (DCS). SIMATIC WinCC Open Architecture (OA) is part of the SIMATIC HMI family. It is designed for applications requiring a high degree of customer-specific adaptability, large or complex applications, and projects that impose specific system requirements or functionality. SIMIT Simulation Platform allows simulating factory settings to predict failures at an early planning stage. SINEC INS (Infrastructure Network Services) is a web-based application that combines various network services in one tool. 
SINEMA Remote Connect is a management platform for remote networks that allows simple management of tunnel connections (VPN) between headquarters, service technicians and installed machines or plants. \n\r\n\r\nSiemens Industrial product WIBU system CodeMeter has a heap buffer overflow vulnerability, which is caused by failure to perform correct boundary checks. An attacker could exploit this vulnerability to cause a buffer overflow and execute arbitrary code on the system", "sources": [ { "db": "NVD", "id": "CVE-2023-3935" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "VULMON", "id": "CVE-2023-3935" } ], "trust": 2.25 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2023-3935", "trust": 3.3 }, { "db": "CERT@VDE", "id": "VDE-2023-031", "trust": 1.9 }, { "db": "CERT@VDE", "id": "VDE-2023-030", "trust": 1.8 }, { "db": "JVN", "id": "JVNVU92598492", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU92008538", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU98137233", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-24-004-01", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-320-03", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-23-257-06", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-012536", "trust": 0.8 }, { "db": "SIEMENS", "id": "SSA-240541", "trust": 0.6 }, { "db": "CNVD", "id": "CNVD-2023-69811", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2023-3935", "trust": 0.1 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "VULMON", "id": "CVE-2023-3935" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "id": "VAR-202309-0672", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": 
{ "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" } ], "trust": 1.1424276933333333 }, "iot_taxonomy": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "category": [ "ICS" ], "sub_category": null, "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" } ] }, "last_update_date": "2024-01-29T15:51:24.364000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Patch for Siemens Industrial product WIBU system CodeMeter heap buffer overflow vulnerability", "trust": 0.6, "url": "https://www.cnvd.org.cn/patchinfo/show/460931" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-787", "trust": 1.0 }, { "problemtype": "Out-of-bounds writing (CWE-787) [ others ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.9, "url": "https://cdn.wibu.com/fileadmin/wibu_downloads/security_advisories/advisorywibu-230704-01-v3.0.pdf" }, { "trust": 1.9, "url": "https://cert.vde.com/en/advisories/vde-2023-031/" }, { "trust": 1.8, "url": "https://cert.vde.com/en/advisories/vde-2023-030/" 
}, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu98137233/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92598492/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu92008538/index.html" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-3935" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-257-06" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-320-03" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-24-004-01" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/html/ssa-240541.html" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/787.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "VULMON", "id": "CVE-2023-3935" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "CNVD", "id": "CNVD-2023-69811" }, { "db": "VULMON", "id": "CVE-2023-3935" }, { "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "db": "NVD", "id": "CVE-2023-3935" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-14T00:00:00", "db": "CNVD", "id": "CNVD-2023-69811" }, { "date": "2023-09-13T00:00:00", "db": "VULMON", "id": "CVE-2023-3935" }, { "date": "2023-12-18T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "date": "2023-09-13T14:15:09.147000", "db": "NVD", "id": "CVE-2023-3935" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-15T00:00:00", "db": "CNVD", "id": "CNVD-2023-69811" }, { "date": "2023-09-13T00:00:00", "db": "VULMON", "id": 
"CVE-2023-3935" }, { "date": "2024-01-09T02:47:00", "db": "JVNDB", "id": "JVNDB-2023-012536" }, { "date": "2024-01-25T20:24:58.783000", "db": "NVD", "id": "CVE-2023-3935" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Wibu-Systems\u00a0AG\u00a0 of \u00a0CodeMeter\u00a0Runtime\u00a0 Out-of-bounds write vulnerability in products from multiple vendors such as", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-012536" } ], "trust": 0.8 } }
var-202210-0037
Vulnerability from variot
A weak randomness in WebCrypto keygen vulnerability exists in Node.js 18 due to a change with EntropySource() in SecretKeyGenTraits::DoKeyGen() in src/crypto/crypto_keygen.cc. There are two problems with this: 1) It does not check the return value; it assumes EntropySource() always succeeds, but it can (and sometimes will) fail. 2) The random data returned by EntropySource() may not be cryptographically strong and is therefore not suitable as keying material. Node.js from the Node.js Foundation, and products from multiple other vendors, contain a vulnerability related to the use of a cryptographically weak PRNG. Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
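The safe pattern that the fix restores can be sketched as follows: treat the entropy source as fallible and fail closed, instead of silently using unchecked bytes as keying material. This is a minimal illustrative sketch in Python, not Node.js's actual code; `generate_secret_key` is a hypothetical helper, not an API of either project.

```python
# Hedged sketch: always fail closed when drawing key material.
# secrets.token_bytes draws from the OS CSPRNG and raises on failure,
# rather than returning possibly-weak data with no error indication --
# the opposite of the unchecked EntropySource() call described above.
import secrets


def generate_secret_key(num_bytes: int = 32) -> bytes:
    key = secrets.token_bytes(num_bytes)
    # Defensive length check; an entropy source that cannot satisfy the
    # request must be treated as a hard error, never as "close enough".
    if len(key) != num_bytes:
        raise RuntimeError("entropy source returned a short read")
    return key


key = generate_secret_key()
print(len(key))  # 32
```

The point is not the specific API but the contract: a key generator must either return full-strength random bytes or raise, with no third outcome.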
====================================================================
Red Hat Security Advisory
Synopsis:          Important: nodejs:16 security update
Advisory ID:       RHSA-2022:6964-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:6964
Issue date:        2022-10-17
CVE Names:         CVE-2022-35255 CVE-2022-35256
====================================================================

1. Summary:
An update for the nodejs:16 module is now available for Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: nodejs 16.
Security Fix(es):
- nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)

- nodejs: HTTP Request Smuggling due to incorrect parsing of header fields (CVE-2022-35256)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
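The header-field parsing flaw listed above (CVE-2022-35256) is an instance of request smuggling: a lenient parser and a strict parser disagree about where a request body ends. A simplified, hedged sketch of the kind of strict framing check a server or proxy can apply — this is illustrative only, not Node.js's actual parser (llhttp):

```python
# Reject requests whose body length is ambiguous: both Content-Length and
# Transfer-Encoding present, or a Transfer-Encoding other than exactly
# "chunked". Ambiguity between two parsers is what enables smuggling.
def is_ambiguous_framing(headers: list[tuple[str, str]]) -> bool:
    names = [name.strip().lower() for name, _ in headers]
    has_content_length = "content-length" in names
    te_values = [value.strip().lower()
                 for name, value in headers
                 if name.strip().lower() == "transfer-encoding"]
    if has_content_length and te_values:
        return True  # CL + TE together: classic CL.TE / TE.CL setup
    if any(value != "chunked" for value in te_values):
        return True  # obfuscated TE value, e.g. "chunked, identity"
    return False


print(is_ambiguous_framing([("Content-Length", "5"),
                            ("Transfer-Encoding", "chunked")]))  # True
```

Real parsers also have to handle whitespace variants, duplicate headers, and obsolete line folding; rejecting, rather than normalizing, these ambiguous forms is the conservative choice.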
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen
2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields
- Package List:
Red Hat Enterprise Linux AppStream (v. 8):
Source:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm

aarch64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm

noarch:
nodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm
nodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm
nodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm

ppc64le:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm

s390x:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm

x86_64:
nodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
nodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm
npm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-35255
https://access.redhat.com/security/cve/CVE-2022-35256
https://access.redhat.com/security/updates/classification/#important
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
iQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E vrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+ BqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc VsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg bcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo 3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs BF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm a4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4 8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT l2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP WBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV jippDtM0Pgw=vFcS -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-5326-1                   security@debian.org
https://www.debian.org/security/                                  Aron Xu
January 24, 2023                      https://www.debian.org/security/faq

Package        : nodejs
CVE ID         : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215
                 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548
Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.
We recommend that you upgrade your nodejs packages.
For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29
https://security.gentoo.org/
Severity: Low
Title: Node.js: Multiple Vulnerabilities
Date: May 08, 2024
Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938,
      #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Affected packages
Package          Vulnerable    Unaffected
net-libs/nodejs  < 16.20.2     >= 16.20.2
Description
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256 https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202210-0037", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "15.0.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", 
"version": "16.12.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.13.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "16.17.1" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.9.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "15.14.0" }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "16.12.0", "versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "16.17.1", "versionStartIncluding": "16.13.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "18.9.1", "versionStartIncluding": "18.0.0", "vulnerable": 
true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "15.14.0", "versionStartIncluding": "15.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-35255" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "169408" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "169779" } ], "trust": 0.3 }, "cve": "CVE-2022-35255", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { 
"attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 9.1, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.2, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 9.1, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-35255", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-35255", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202210-1268", "trust": 0.6, "value": "CRITICAL" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A weak randomness in WebCrypto keygen vulnerability exists in Node.js 18 due to a change with EntropySource() in SecretKeyGenTraits::DoKeyGen() in src/crypto/crypto_keygen.cc. There are two problems with this: 1) It does not check the return value, it assumes EntropySource() always succeeds, but it can (and sometimes will) fail. 2) The random data returned by EntropySource() may not be cryptographically strong and therefore not suitable as keying material. Node.js Foundation of Node.js Products from multiple other vendors have weak encryption. 
PRNG There is a vulnerability in the use of.Information may be obtained and information may be tampered with. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: nodejs:16 security update\nAdvisory ID: RHSA-2022:6964-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6964\nIssue date: 2022-10-17\nCVE Names: CVE-2022-35255 CVE-2022-35256\n====================================================================\n1. Summary:\n\nAn update for the nodejs:16 module is now available for Red Hat Enterprise\nLinux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux AppStream (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nnodejs 16. \n\nSecurity Fix(es):\n\n* nodejs: weak randomness in WebCrypto keygen (CVE-2022-35255)\n\n* nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n(CVE-2022-35256)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2130517 - CVE-2022-35255 nodejs: weak randomness in WebCrypto keygen\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 8):\n\nSource:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.src.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.src.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.src.rpm\n\naarch64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.aarch64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.aarch64.rpm\n\nnoarch:\nnodejs-docs-16.17.1-1.module+el8.6.0+16848+a483195a.noarch.rpm\nnodejs-nodemon-2.0.19-2.module+el8.6.0+16240+7ca51420.noarch.rpm\nnodejs-packaging-25-1.module+el8.5.0+10992+fac5fe06.noarch.rpm\n\nppc64le:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.ppc64le.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.ppc64le.rpm\n\ns390x:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debuginfo-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.s390x.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.s390x.rpm\n\nx86_64:\nnodejs-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debuginfo-16
.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-debugsource-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-devel-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnodejs-full-i18n-16.17.1-1.module+el8.6.0+16848+a483195a.x86_64.rpm\nnpm-8.15.0-1.16.17.1.1.module+el8.6.0+16848+a483195a.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-35255\nhttps://access.redhat.com/security/cve/CVE-2022-35256\nhttps://access.redhat.com/security/updates/classification/#important\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY01tM9zjgjWX9erEAQgRRw/8DdK1QObq3so9+4ybaPFjCpdytAyNFy2E\nvrWNb7xRSO8myrQJ3cspxWMgRgfjMeJYPL8MT7iolW0SMWPd3uNMIh6ej3nK6zo+\nBqHGgPBB2+knIF9ApMxW+2OpQAl4j0ICOeyLinqUXsyzDqPUOdW5kgNIPog668tc\nVsxB2Lt7pAJcpNkmwx6gvU5aZ6rWOUeNKyjAnat5AJPUx+NbtOtFWymivlPKCNWg\nbcGktfXz22tAixuEih9pC+YrPbJ++AHg5lZbK35uHBeGe7i9OdhbH8lbGrV5+0Vo\n3DOlVTvuofjPZr0Ft50ChMsgsc/3pmBTXZOEfLrNHIMlJ2sHsP/3ZQ4hUmYYI3xs\nBF6HmgS4d3rEybSyXjqkQHKvSEi8KxBcs0y8RrvZeEUOfwTPwdaWKIhlzzn3lGYm\na4iPlYzfCTfV4h2YdLvNE0hcOeaChiPVWvVxb9aV9XUW2ibWyHPSlJpBoP1UjMW4\n8T0tYn6hUUWhWWT4cra5ipEjCmU9YfhdFsjoqKS/KFNA7kD94NSqWcbPs+3XnKbT\nl2IjXb8aBpn2Yykq1u4t12VEJCnKeTEUt43/LAlXW1mkNV3OQ2bPl2qwdEPTQxDP\nWBoK9aPtqD6W3VyuNza3VItmZKYw7nHtZL40YpvbdA6XtmlHZF6bFEiLdSwNduaV\njippDtM0Pgw=vFcS\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 24, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nodejs\nCVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. \n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage Vulnerable Unaffected\n--------------- ------------ ------------\nnet-libs/nodejs \u003c 16.20.2 \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. 
\n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-35255" }, { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "VULMON", "id": "CVE-2022-35255" }, { "db": "PACKETSTORM", "id": "169408" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-35255", "trust": 3.8 }, { "db": "HACKERONE", "id": "1690000", "trust": 2.4 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 2.4 }, { "db": "ICS CERT", "id": "ICSA-23-017-03", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU90782730", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-022576", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "169408", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169779", "trust": 0.7 }, { 
"db": "PACKETSTORM", "id": "170727", "trust": 0.7 }, { "db": "AUSCERT", "id": "ESB-2022.5146", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202210-1268", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-35255", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168757", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35255" }, { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "PACKETSTORM", "id": "169408" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "id": "VAR-202210-0037", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-05-12T03:18:39.471000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Node.js Fixing measures for security feature vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=216854" }, { "title": "Red Hat: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-35255" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35255" }, { "db": "CNNVD", "id": "CNNVD-202210-1268" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-338", "trust": 1.0 }, { "problemtype": 
"Cryptographic weakness PRNG Use of (CWE-338) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 2.4, "url": "https://hackerone.com/reports/1690000" }, { "trust": 2.4, "url": "https://security.netapp.com/advisory/ntap-20230113-0002/" }, { "trust": 2.4, "url": "https://www.debian.org/security/2023/dsa-5326" }, { "trust": 1.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90782730/" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-017-03" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169408/red-hat-security-advisory-2022-6963-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5146" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169779/red-hat-security-advisory-2022-7821-01.html" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-35255/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2022-35255" }, { "trust": 0.3, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.3, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-35256" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6963" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6964" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7821" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nodejs" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, 
{ "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-35255" }, { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "PACKETSTORM", "id": "169408" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-35255" }, { "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "db": "PACKETSTORM", "id": "169408" }, { "db": "PACKETSTORM", "id": "168757" }, { "db": "PACKETSTORM", "id": "169779" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "db": "NVD", "id": "CVE-2022-35255" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-17T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "date": 
"2022-10-18T22:30:35", "db": "PACKETSTORM", "id": "169408" }, { "date": "2022-10-18T14:27:29", "db": "PACKETSTORM", "id": "168757" }, { "date": "2022-11-08T13:50:31", "db": "PACKETSTORM", "id": "169779" }, { "date": "2023-01-25T16:09:12", "db": "PACKETSTORM", "id": "170727" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-10-18T00:00:00", "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "date": "2022-12-05T22:15:10.513000", "db": "NVD", "id": "CVE-2022-35255" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-11-17T08:21:00", "db": "JVNDB", "id": "JVNDB-2022-022576" }, { "date": "2023-02-01T00:00:00", "db": "CNNVD", "id": "CNNVD-202210-1268" }, { "date": "2023-03-01T15:03:19.287000", "db": "NVD", "id": "CVE-2022-35255" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202210-1268" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Node.js\u00a0Foundation\u00a0 of \u00a0Node.js\u00a0 Cryptographic vulnerabilities in products from multiple other vendors \u00a0PRNG\u00a0 Vulnerability regarding the use of", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-022576" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "security feature problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202210-1268" } ], "trust": 0.6 } }
var-202109-1795
Vulnerability from variot
When sending data to an MQTT server, libcurl versions 7.73.0 through 7.78.0 could in some circumstances erroneously keep a pointer to an already freed memory area, both reusing that pointer in a subsequent call to send data and freeing it again. Pillow is a Python-based image processing library. There is currently no further information about this vulnerability; please follow CNNVD or vendor announcements. A use-after-free security issue has been found in the MQTT sending component of curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
APPLE-SA-2022-03-14-4 macOS Monterey 12.3
macOS Monterey 12.3 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT213183.
Accelerate Framework Available for: macOS Monterey Impact: Opening a maliciously crafted PDF file may lead to an unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2022-22633: an anonymous researcher
AMD Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22669: an anonymous researcher
AppKit Available for: macOS Monterey Impact: A malicious application may be able to gain root privileges Description: A logic issue was addressed with improved validation. CVE-2022-22665: Lockheed Martin Red Team
AppleGraphicsControl Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22631: an anonymous researcher
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro
AppleScript Available for: macOS Monterey Impact: An application may be able to read restricted memory Description: This issue was addressed with improved checks. CVE-2022-22648: an anonymous researcher
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted AppleScript binary may result in unexpected application termination or disclosure of process memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro CVE-2022-22627: Qi Sun and Robert Ai of Trend Micro
AppleScript Available for: macOS Monterey Impact: Processing a maliciously crafted file may lead to arbitrary code execution Description: A memory corruption issue was addressed with improved validation. CVE-2022-22597: Qi Sun and Robert Ai of Trend Micro
BOM Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)
curl Available for: macOS Monterey Impact: Multiple issues in curl Description: Multiple issues were addressed by updating to curl version 7.79.1. CVE-2021-22946 CVE-2021-22947 CVE-2021-22945 CVE-2022-22623
FaceTime Available for: macOS Monterey Impact: A user may send audio and video in a FaceTime call without knowing that they have done so Description: This issue was addressed with improved checks. CVE-2022-22643: Sonali Luthar of the University of Virginia, Michael Liao of the University of Illinois at Urbana-Champaign, Rohan Pahwa of Rutgers University, and Bao Nguyen of the University of Florida
ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2022-22611: Xingyu Jin of Google
ImageIO Available for: macOS Monterey Impact: Processing a maliciously crafted image may lead to heap corruption Description: A memory consumption issue was addressed with improved memory handling. CVE-2022-22612: Xingyu Jin of Google
Intel Graphics Driver Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A type confusion issue was addressed with improved state handling. CVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba Security Pandora Lab
IOGPUFamily Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22641: Mohamed Ghannam (@_simo36)
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22613: Alex, an anonymous researcher
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A use after free issue was addressed with improved memory management. CVE-2022-22614: an anonymous researcher CVE-2022-22615: an anonymous researcher
Kernel Available for: macOS Monterey Impact: A malicious application may be able to elevate privileges Description: A logic issue was addressed with improved state management. CVE-2022-22632: Keegan Saunders
Kernel Available for: macOS Monterey Impact: An attacker in a privileged position may be able to perform a denial of service attack Description: A null pointer dereference was addressed with improved validation. CVE-2022-22638: derrek (@derrekr6)
Kernel Available for: macOS Monterey Impact: An application may be able to execute arbitrary code with kernel privileges Description: A memory corruption issue was addressed with improved validation. CVE-2022-22640: sqrtpwn
libarchive Available for: macOS Monterey Impact: Multiple issues in libarchive Description: Multiple memory corruption issues existed in libarchive. These issues were addressed with improved input validation. CVE-2021-36976
Login Window Available for: macOS Monterey Impact: A person with access to a Mac may be able to bypass Login Window Description: This issue was addressed with improved checks. CVE-2022-22647: an anonymous researcher
LoginWindow Available for: macOS Monterey Impact: A local attacker may be able to view the previous logged in user’s desktop from the fast user switching screen Description: An authentication issue was addressed with improved state management. CVE-2022-22656
GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: A memory initialization issue was addressed with improved memory handling. CVE-2022-22657: Brandon Perry of Atredis Partners
GarageBand MIDI Available for: macOS Monterey Impact: Opening a maliciously crafted file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2022-22664: Brandon Perry of Atredis Partners
NSSpellChecker Available for: macOS Monterey Impact: A malicious application may be able to access information about a user's contacts Description: A privacy issue existed in the handling of Contact cards. This was addressed with improved state management. CVE-2022-22644: an anonymous researcher
PackageKit Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22617: Mickey Jin (@patch1t)
Preferences Available for: macOS Monterey Impact: A malicious application may be able to read other applications' settings Description: The issue was addressed with additional permissions checks. CVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020) of Tencent Security Xuanwu Lab (xlab.tencent.com)
QuickTime Player Available for: macOS Monterey Impact: A plug-in may be able to inherit the application's permissions and access user data Description: This issue was addressed with improved checks. CVE-2022-22650: Wojciech Reguła (@_r3ggi) of SecuRing
Safari Downloads Available for: macOS Monterey Impact: A maliciously crafted ZIP archive may bypass Gatekeeper checks Description: This issue was addressed with improved checks. CVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley (@jbradley89) of Jamf Software, Mickey Jin (@patch1t)
Sandbox Available for: macOS Monterey Impact: A malicious application may be able to bypass certain Privacy preferences Description: The issue was addressed with improved permissions logic. CVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited, Khiem Tran
Siri Available for: macOS Monterey Impact: A person with physical access to a device may be able to use Siri to obtain some location information from the lock screen Description: A permissions issue was addressed with improved validation. CVE-2022-22599: Andrew Goldberg of the University of Texas at Austin, McCombs School of Business (linkedin.com/andrew-goldberg/)
SMB Available for: macOS Monterey Impact: A remote attacker may be able to cause unexpected system termination or corrupt kernel memory Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2022-22651: Felix Poulin-Belanger
SoftwareUpdate Available for: macOS Monterey Impact: An application may be able to gain elevated privileges Description: A logic issue was addressed with improved state management. CVE-2022-22639: Mickey Jin (@patch1t)
System Preferences Available for: macOS Monterey Impact: An app may be able to spoof system notifications and UI Description: This issue was addressed with a new entitlement. CVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)
UIKit Available for: macOS Monterey Impact: A person with physical access to an iOS device may be able to see sensitive information via keyboard suggestions Description: This issue was addressed with improved checks. CVE-2022-22621: Joey Hewitt
Vim Available for: macOS Monterey Impact: Multiple issues in Vim Description: Multiple issues were addressed by updating Vim. CVE-2021-4136 CVE-2021-4166 CVE-2021-4173 CVE-2021-4187 CVE-2021-4192 CVE-2021-4193 CVE-2021-46059 CVE-2022-0128 CVE-2022-0156 CVE-2022-0158
VoiceOver Available for: macOS Monterey Impact: A user may be able to view restricted content from the lock screen Description: A lock screen issue was addressed with improved state management. CVE-2021-30918: an anonymous researcher
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may disclose sensitive user information Description: A cookie management issue was addressed with improved state management. WebKit Bugzilla: 232748 CVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to code execution Description: A memory corruption issue was addressed with improved state management. WebKit Bugzilla: 232812 CVE-2022-22610: Quan Yin of Bigo Technology Live Client Team
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. WebKit Bugzilla: 233172 CVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab WebKit Bugzilla: 234147 CVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab
WebKit Available for: macOS Monterey Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A buffer overflow issue was addressed with improved memory handling. WebKit Bugzilla: 234966 CVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro Zero Day Initiative
WebKit Available for: macOS Monterey Impact: A malicious website may cause unexpected cross-origin behavior Description: A logic issue was addressed with improved state management. WebKit Bugzilla: 235294 CVE-2022-22637: Tom McKee of Google
Wi-Fi Available for: macOS Monterey Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2022-22668: MrPhil17
xar Available for: macOS Monterey Impact: A local user may be able to write arbitrary files Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2022-22582: Richard Warren of NCC Group
Additional recognition
AirDrop We would like to acknowledge Omar Espino (omespino.com), Ron Masas of BreakPoint.sh for their assistance.
Bluetooth We would like to acknowledge an anonymous researcher, chenyuwang (@mzzzz__) of Tencent Security Xuanwu Lab for their assistance.
Face Gallery We would like to acknowledge Tian Zhang (@KhaosT) for their assistance.
Intel Graphics Driver We would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi Wu (@3ndy1) for their assistance.
Local Authentication We would like to acknowledge an anonymous researcher for their assistance.
Notes We would like to acknowledge Nathaniel Ekoniak of Ennate Technologies for their assistance.
Password Manager We would like to acknowledge Maximilian Golla (@m33x) of Max Planck Institute for Security and Privacy (MPI-SP) for their assistance.
Siri We would like to acknowledge an anonymous researcher for their assistance.
syslog We would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for their assistance.
TCC We would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive Security for their assistance.
UIKit We would like to acknowledge Tim Shadel of Day Logger, Inc. for their assistance.
WebKit We would like to acknowledge Abdullah Md Shaleh for their assistance.
WebKit Storage We would like to acknowledge Martin Bajanik of FingerprintJS for their assistance.
macOS Monterey 12.3 may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/ All information is also posted on the Apple Security Updates web site: https://support.apple.com/en-us/HT201222.
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
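Stepping back to the MQTT flaw this entry opens with (CVE-2021-22945): the bug class is a dangling handle kept after a buffer is released, so a later send path both reuses and re-frees it. Below is a minimal Python model of that pattern and its standard fix (clear the handle at the point of release). Every name here is illustrative; none of it is curl's actual code.

```python
# Toy model of the CVE-2021-22945 bug class: an owner releases a
# buffer but keeps the stale handle around. The fix is to clear the
# handle where the release happens, so a repeat release is a no-op.

class SendBuffer:
    def __init__(self, payload: bytes):
        self.payload = payload
        self.freed = False

    def free(self):
        if self.freed:                 # the double free curl could hit
            raise RuntimeError("double free of send buffer")
        self.freed = True


class MqttConnection:
    def __init__(self):
        self.sendbuf = None            # pending payload, owned here

    def queue(self, payload: bytes):
        self.sendbuf = SendBuffer(payload)

    def release_sendbuf(self):
        if self.sendbuf is not None:
            self.sendbuf.free()
            self.sendbuf = None        # drop the stale handle


conn = MqttConnection()
conn.queue(b"publish")
conn.release_sendbuf()
conn.release_sendbuf()                 # second call is now safely a no-op
assert conn.sendbuf is None
```

In C, the equivalent discipline is freeing the buffer in one place and immediately nulling the stored pointer, which turns any erroneous repeat free into a harmless `free(NULL)`.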
==========================================================================
Ubuntu Security Notice USN-5079-3
September 21, 2021
curl vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 18.04 LTS
Summary:
USN-5079-1 introduced a regression in curl.
Software Description: - curl: HTTP, HTTPS, and FTP client and client libraries
Details:
USN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a regression on Ubuntu 18.04 LTS. This update fixes the problem.
We apologize for the inconvenience. A remote attacker could use this issue to cause curl to crash, resulting in a denial of service, or possibly execute arbitrary code. (CVE-2021-22945) Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. When receiving certain responses from servers, curl would continue without TLS even when the option to require a successful upgrade to TLS was specified. (CVE-2021-22946) Patrick Monnerat discovered that curl incorrectly handled responses received before STARTTLS. A remote attacker could possibly use this issue to inject responses and intercept communications. (CVE-2021-22947)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 18.04 LTS:
  curl 7.58.0-2ubuntu3.16
  libcurl3-gnutls 7.58.0-2ubuntu3.16
  libcurl3-nss 7.58.0-2ubuntu3.16
  libcurl4 7.58.0-2ubuntu3.16
In general, a standard system update will make all the necessary changes.

These flaws may allow remote attackers to obtain sensitive information, leak authentication or cookie header data, or facilitate a denial of service attack.
For the stable distribution (bullseye), these problems have been fixed in version 7.74.0-1.3+deb11u2.
We recommend that you upgrade your curl packages.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory GLSA 202212-01
https://security.gentoo.org/
Severity: High
Title: curl: Multiple Vulnerabilities
Date: December 19, 2022
Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365
ID: 202212-01
Synopsis
Multiple vulnerabilities have been found in curl, the worst of which could result in arbitrary code execution.
Background
A command line tool and library for transferring data with URLs.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 net-misc/curl < 7.86.0 >= 7.86.0
Description
Multiple vulnerabilities have been discovered in curl. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All curl users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-misc/curl-7.86.0"
References
[ 1 ] CVE-2021-22922 https://nvd.nist.gov/vuln/detail/CVE-2021-22922
[ 2 ] CVE-2021-22923 https://nvd.nist.gov/vuln/detail/CVE-2021-22923
[ 3 ] CVE-2021-22925 https://nvd.nist.gov/vuln/detail/CVE-2021-22925
[ 4 ] CVE-2021-22926 https://nvd.nist.gov/vuln/detail/CVE-2021-22926
[ 5 ] CVE-2021-22945 https://nvd.nist.gov/vuln/detail/CVE-2021-22945
[ 6 ] CVE-2021-22946 https://nvd.nist.gov/vuln/detail/CVE-2021-22946
[ 7 ] CVE-2021-22947 https://nvd.nist.gov/vuln/detail/CVE-2021-22947
[ 8 ] CVE-2022-22576 https://nvd.nist.gov/vuln/detail/CVE-2022-22576
[ 9 ] CVE-2022-27774 https://nvd.nist.gov/vuln/detail/CVE-2022-27774
[ 10 ] CVE-2022-27775 https://nvd.nist.gov/vuln/detail/CVE-2022-27775
[ 11 ] CVE-2022-27776 https://nvd.nist.gov/vuln/detail/CVE-2022-27776
[ 12 ] CVE-2022-27779 https://nvd.nist.gov/vuln/detail/CVE-2022-27779
[ 13 ] CVE-2022-27780 https://nvd.nist.gov/vuln/detail/CVE-2022-27780
[ 14 ] CVE-2022-27781 https://nvd.nist.gov/vuln/detail/CVE-2022-27781
[ 15 ] CVE-2022-27782 https://nvd.nist.gov/vuln/detail/CVE-2022-27782
[ 16 ] CVE-2022-30115 https://nvd.nist.gov/vuln/detail/CVE-2022-30115
[ 17 ] CVE-2022-32205 https://nvd.nist.gov/vuln/detail/CVE-2022-32205
[ 18 ] CVE-2022-32206 https://nvd.nist.gov/vuln/detail/CVE-2022-32206
[ 19 ] CVE-2022-32207 https://nvd.nist.gov/vuln/detail/CVE-2022-32207
[ 20 ] CVE-2022-32208 https://nvd.nist.gov/vuln/detail/CVE-2022-32208
[ 21 ] CVE-2022-32221 https://nvd.nist.gov/vuln/detail/CVE-2022-32221
[ 22 ] CVE-2022-35252 https://nvd.nist.gov/vuln/detail/CVE-2022-35252
[ 23 ] CVE-2022-35260 https://nvd.nist.gov/vuln/detail/CVE-2022-35260
[ 24 ] CVE-2022-42915 https://nvd.nist.gov/vuln/detail/CVE-2022-42915
[ 25 ] CVE-2022-42916 https://nvd.nist.gov/vuln/detail/CVE-2022-42916
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202212-01
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202109-1795", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0.1.1" }, { "model": "macos", "scope": "gte", "trust": 1.0, "vendor": "apple", "version": "12.0.0" }, { "model": "mysql server", "scope": "lte", "trust": 1.0, "vendor": "oracle", 
"version": "8.0.26" }, { "model": "universal forwarder", "scope": "eq", "trust": 1.0, "vendor": "splunk", "version": "9.1.0" }, { "model": "cloud backup", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "clustered data ontap", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "universal forwarder", "scope": "gte", "trust": 1.0, "vendor": "splunk", "version": "8.2.0" }, { "model": "universal forwarder", "scope": "gte", "trust": 1.0, "vendor": "splunk", "version": "9.0.0" }, { "model": "macos", "scope": "lt", "trust": 1.0, "vendor": "apple", "version": "12.3" }, { "model": "libcurl", "scope": "lte", "trust": 1.0, "vendor": "haxx", "version": "7.78.0" }, { "model": "universal forwarder", "scope": "lt", "trust": 1.0, "vendor": "splunk", "version": "9.0.6" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h500e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "universal forwarder", "scope": "lt", "trust": 1.0, "vendor": "splunk", "version": "8.2.12" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "33" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "8.0.0" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mysql server", "scope": "gte", "trust": 1.0, "vendor": "oracle", "version": "5.7.0" }, { "model": "h700e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "mysql server", "scope": "lte", "trust": 1.0, 
"vendor": "oracle", "version": "5.7.35" }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "solidfire baseboard management controller", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "libcurl", "scope": "gte", "trust": 1.0, "vendor": "haxx", "version": "7.73.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-22945" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:haxx:libcurl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "7.78.0", "versionStartIncluding": "7.73.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:33:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:netapp:cloud_backup:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:netapp:clustered_data_ontap:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "8.0.26", "versionStartIncluding": "8.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "5.7.35", "versionStartIncluding": "5.7.0", "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], 
"operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], 
"cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:solidfire_baseboard_management_controller_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:solidfire_baseboard_management_controller:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:apple:macos:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "12.3", "versionStartIncluding": "12.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:9.1.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "9.0.6", "versionStartIncluding": "9.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:splunk:universal_forwarder:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.2.12", "versionStartIncluding": "8.2.0", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-22945" } ] }, "credits": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Ubuntu", "sources": [ { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" } ], "trust": 0.2 }, "cve": "CVE-2021-22945", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "VHN-381419", "impactScore": 4.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:P/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", 
"availabilityImpact": "HIGH", "baseScore": 9.1, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.2, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-22945", "trust": 1.0, "value": "CRITICAL" }, { "author": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202109-998", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULHUB", "id": "VHN-381419", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-381419" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202109-998" }, { "db": "NVD", "id": "CVE-2021-22945" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "When sending data to an MQTT server, libcurl \u003c= 7.73.0 and 7.78.0 could in some circumstances erroneously keep a pointer to an already freed memory area and both use that again in a subsequent call to send data and also free it *again*. Pillow is a Python-based image processing library. \nThere is currently no information about this vulnerability, please feel free to follow CNNVD or manufacturer announcements. A use-after-free security issue has been found in the MQTT sending component of curl prior to 7.79.0. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2022-03-14-4 macOS Monterey 12.3\n\nmacOS Monterey 12.3 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT213183. 
\n\nAccelerate Framework\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted PDF file may lead to an\nunexpected application termination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2022-22633: an anonymous researcher\n\nAMD\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22669: an anonymous researcher\n\nAppKit\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to gain root privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2022-22665: Lockheed Martin Red Team\n\nAppleGraphicsControl\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22631: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22625: Mickey Jin (@patch1t) of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: An application may be able to read restricted memory\nDescription: This issue was addressed with improved checks. \nCVE-2022-22648: an anonymous researcher\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted AppleScript binary may\nresult in unexpected application termination or disclosure of process\nmemory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. 
\nCVE-2022-22626: Mickey Jin (@patch1t) of Trend Micro\nCVE-2022-22627: Qi Sun and Robert Ai of Trend Micro\n\nAppleScript\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted file may lead to arbitrary\ncode execution\nDescription: A memory corruption issue was addressed with improved\nvalidation. \nCVE-2022-22597: Qi Sun and Robert Ai of Trend Micro\n\nBOM\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\ncurl\nAvailable for: macOS Monterey\nImpact: Multiple issues in curl\nDescription: Multiple issues were addressed by updating to curl\nversion 7.79.1. \nCVE-2021-22946\nCVE-2021-22947\nCVE-2021-22945\nCVE-2022-22623\n\nFaceTime\nAvailable for: macOS Monterey\nImpact: A user may send audio and video in a FaceTime call without\nknowing that they have done so\nDescription: This issue was addressed with improved checks. \nCVE-2022-22643: Sonali Luthar of the University of Virginia, Michael\nLiao of the University of Illinois at Urbana-Champaign, Rohan Pahwa\nof Rutgers University, and Bao Nguyen of the University of Florida\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2022-22611: Xingyu Jin of Google\n\nImageIO\nAvailable for: macOS Monterey\nImpact: Processing a maliciously crafted image may lead to heap\ncorruption\nDescription: A memory consumption issue was addressed with improved\nmemory handling. 
\nCVE-2022-22612: Xingyu Jin of Google\n\nIntel Graphics Driver\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A type confusion issue was addressed with improved state\nhandling. \nCVE-2022-22661: an anonymous researcher, Peterpan0927 of Alibaba\nSecurity Pandora Lab\n\nIOGPUFamily\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22641: Mohamed Ghannam (@_simo36)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2022-22613: Alex, an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2022-22614: an anonymous researcher\nCVE-2022-22615: an anonymous researcher\n\nKernel\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to elevate privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22632: Keegan Saunders\n\nKernel\nAvailable for: macOS Monterey\nImpact: An attacker in a privileged position may be able to perform a\ndenial of service attack\nDescription: A null pointer dereference was addressed with improved\nvalidation. \nCVE-2022-22638: derrek (@derrekr6)\n\nKernel\nAvailable for: macOS Monterey\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. 
\nCVE-2022-22640: sqrtpwn\n\nlibarchive\nAvailable for: macOS Monterey\nImpact: Multiple issues in libarchive\nDescription: Multiple memory corruption issues existed in libarchive. \nThese issues were addressed with improved input validation. \nCVE-2021-36976\n\nLogin Window\nAvailable for: macOS Monterey\nImpact: A person with access to a Mac may be able to bypass Login\nWindow\nDescription: This issue was addressed with improved checks. \nCVE-2022-22647: an anonymous researcher\n\nLoginWindow\nAvailable for: macOS Monterey\nImpact: A local attacker may be able to view the previous logged in\nuser\u2019s desktop from the fast user switching screen\nDescription: An authentication issue was addressed with improved\nstate management. \nCVE-2022-22656\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: A memory initialization issue was addressed with\nimproved memory handling. \nCVE-2022-22657: Brandon Perry of Atredis Partners\n\nGarageBand MIDI\nAvailable for: macOS Monterey\nImpact: Opening a maliciously crafted file may lead to unexpected\napplication termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2022-22664: Brandon Perry of Atredis Partners\n\nNSSpellChecker\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to access information\nabout a user\u0027s contacts\nDescription: A privacy issue existed in the handling of Contact\ncards. This was addressed with improved state management. \nCVE-2022-22644: an anonymous researcher\n\nPackageKit\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. 
\nCVE-2022-22617: Mickey Jin (@patch1t)\n\nPreferences\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to read other\napplications\u0027 settings\nDescription: The issue was addressed with additional permissions\nchecks. \nCVE-2022-22609: Zhipeng Huo (@R3dF09) and Yuebin Sun (@yuebinsun2020)\nof Tencent Security Xuanwu Lab (xlab.tencent.com)\n\nQuickTime Player\nAvailable for: macOS Monterey\nImpact: A plug-in may be able to inherit the application\u0027s\npermissions and access user data\nDescription: This issue was addressed with improved checks. \nCVE-2022-22650: Wojciech Regu\u0142a (@_r3ggi) of SecuRing\n\nSafari Downloads\nAvailable for: macOS Monterey\nImpact: A maliciously crafted ZIP archive may bypass Gatekeeper\nchecks\nDescription: This issue was addressed with improved checks. \nCVE-2022-22616: Ferdous Saljooki (@malwarezoo) and Jaron Bradley\n(@jbradley89) of Jamf Software, Mickey Jin (@patch1t)\n\nSandbox\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to bypass certain Privacy\npreferences\nDescription: The issue was addressed with improved permissions logic. \nCVE-2022-22600: Sudhakar Muthumani of Primefort Private Limited,\nKhiem Tran\n\nSiri\nAvailable for: macOS Monterey\nImpact: A person with physical access to a device may be able to use\nSiri to obtain some location information from the lock screen\nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2022-22599: Andrew Goldberg of the University of Texas at Austin,\nMcCombs School of Business (linkedin.com/andrew-goldberg/)\n\nSMB\nAvailable for: macOS Monterey\nImpact: A remote attacker may be able to cause unexpected system\ntermination or corrupt kernel memory\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. 
\nCVE-2022-22651: Felix Poulin-Belanger\n\nSoftwareUpdate\nAvailable for: macOS Monterey\nImpact: An application may be able to gain elevated privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2022-22639: Mickey Jin (@patch1t)\n\nSystem Preferences\nAvailable for: macOS Monterey\nImpact: An app may be able to spoof system notifications and UI\nDescription: This issue was addressed with a new entitlement. \nCVE-2022-22660: Guilherme Rambo of Best Buddy Apps (rambo.codes)\n\nUIKit\nAvailable for: macOS Monterey\nImpact: A person with physical access to an iOS device may be able to\nsee sensitive information via keyboard suggestions\nDescription: This issue was addressed with improved checks. \nCVE-2022-22621: Joey Hewitt\n\nVim\nAvailable for: macOS Monterey\nImpact: Multiple issues in Vim\nDescription: Multiple issues were addressed by updating Vim. \nCVE-2021-4136\nCVE-2021-4166\nCVE-2021-4173\nCVE-2021-4187\nCVE-2021-4192\nCVE-2021-4193\nCVE-2021-46059\nCVE-2022-0128\nCVE-2022-0156\nCVE-2022-0158\n\nVoiceOver\nAvailable for: macOS Monterey\nImpact: A user may be able to view restricted content from the lock\nscreen\nDescription: A lock screen issue was addressed with improved state\nmanagement. \nCVE-2021-30918: an anonymous researcher\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may disclose\nsensitive user information\nDescription: A cookie management issue was addressed with improved\nstate management. \nWebKit Bugzilla: 232748\nCVE-2022-22662: Prakash (@1lastBr3ath) of Threat Nix\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to code\nexecution\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nWebKit Bugzilla: 232812\nCVE-2022-22610: Quan Yin of Bigo Technology Live Client Team\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. \nWebKit Bugzilla: 233172\nCVE-2022-22624: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\nWebKit Bugzilla: 234147\nCVE-2022-22628: Kirin (@Pwnrin) of Tencent Security Xuanwu Lab\n\nWebKit\nAvailable for: macOS Monterey\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A buffer overflow issue was addressed with improved\nmemory handling. \nWebKit Bugzilla: 234966\nCVE-2022-22629: Jeonghoon Shin at Theori working with Trend Micro\nZero Day Initiative\n\nWebKit\nAvailable for: macOS Monterey\nImpact: A malicious website may cause unexpected cross-origin\nbehavior\nDescription: A logic issue was addressed with improved state\nmanagement. \nWebKit Bugzilla: 235294\nCVE-2022-22637: Tom McKee of Google\n\nWi-Fi\nAvailable for: macOS Monterey\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2022-22668: MrPhil17\n\nxar\nAvailable for: macOS Monterey\nImpact: A local user may be able to write arbitrary files\nDescription: A validation issue existed in the handling of symlinks. \nThis issue was addressed with improved validation of symlinks. \nCVE-2022-22582: Richard Warren of NCC Group\n\nAdditional recognition\n\nAirDrop\nWe would like to acknowledge Omar Espino (omespino.com), Ron Masas of\nBreakPoint.sh for their assistance. \n\nBluetooth\nWe would like to acknowledge an anonymous researcher, chenyuwang\n(@mzzzz__) of Tencent Security Xuanwu Lab for their assistance. \n\nFace Gallery\nWe would like to acknowledge Tian Zhang (@KhaosT) for their\nassistance. 
\n\nIntel Graphics Driver\nWe would like to acknowledge Jack Dates of RET2 Systems, Inc., Yinyi\nWu (@3ndy1) for their assistance. \n\nLocal Authentication\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nNotes\nWe would like to acknowledge Nathaniel Ekoniak of Ennate Technologies\nfor their assistance. \n\nPassword Manager\nWe would like to acknowledge Maximilian Golla (@m33x) of Max Planck\nInstitute for Security and Privacy (MPI-SP) for their assistance. \n\nSiri\nWe would like to acknowledge an anonymous researcher for their\nassistance. \n\nsyslog\nWe would like to acknowledge Yonghwi Jin (@jinmo123) of Theori for\ntheir assistance. \n\nTCC\nWe would like to acknowledge Csaba Fitzl (@theevilbit) of Offensive\nSecurity for their assistance. \n\nUIKit\nWe would like to acknowledge Tim Shadel of Day Logger, Inc. for their\nassistance. \n\nWebKit\nWe would like to acknowledge Abdullah Md Shaleh for their assistance. \n\nWebKit Storage\nWe would like to acknowledge Martin Bajanik of FingerprintJS for\ntheir assistance. \n\nmacOS Monterey 12.3 may be obtained from the Mac App Store or Apple\u0027s\nSoftware Downloads web site: https://support.apple.com/downloads/\nAll information is also posted on the Apple Security Updates\nweb site: https://support.apple.com/en-us/HT201222. 
\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEePiLW1MrMjw19XzoeC9qKD1prhgFAmIv0O4ACgkQeC9qKD1p\nrhjGGRAAjqIyEzN+LAk+2uzHIMQNEwav9fqo/ZNoYAOzNgActK56PIC/PBM3SzHd\nLrGFKbBq/EMU4EqXT6ycB7/uZfaAZVCBDNo1qOoYNHXnKtGL2Z/96mV14qbSmRvC\njfg1pC0G1jPTxJKvHhuQSZHDGj+BI458fwuTY48kjCnzlWf9dKr2kdjUjE38X9RM\n0upKVKqY+oWdbn5jPwgZ408NOqzHrHDW1iIYd4v9UrKN3pfMGDzVZTr/offL6VFL\nosOVWv1IZvXrhPsrtd2KfG0hTHz71vShVZ7jGAsGEdC/mT79zwFbYuzBFy791xFa\nrizr/ZWGfWBSYy8O90d1l13lDlE739YPc/dt1mjcvP9FTnzMwBagy+6//zAVe0v/\nKZOjmvtK5sRvrQH54E8qTYitdMpY2aZhfT6D8tcl+98TjxTDNXXj/gypdCXNWqyB\nL1PtFhTjQ0WnzUNB7sosM0zAjfZ1iPAZq0XHDQ6p6gEdVavNOHo/ekgibVm5f1pi\nkwBHkKyq55QbzipDWwXl6Owk/iaHPxgENYb78BpeUQSFei+IYDUsyLkPh3L95PHZ\nJSyKOtbBArlYOWcxlYHn+hDK8iotA1c/SHDefYOoNkp1uP853Ge09eWq+zMzUwEo\nGXXJYMi1Q8gmJ9wK/A3d/FKY4FBZxpByUUgjYhiMKTU5cSeihaI=\n=RiA+\n-----END PGP SIGNATURE-----\n\n\n. ==========================================================================\nUbuntu Security Notice USN-5079-3\nSeptember 21, 2021\n\ncurl vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 18.04 LTS\n\nSummary:\n\nUSN-5079-1 introduced a regression in curl. \n\nSoftware Description:\n- curl: HTTP, HTTPS, and FTP client and client libraries\n\nDetails:\n\nUSN-5079-1 fixed vulnerabilities in curl. One of the fixes introduced a\nregression on Ubuntu 18.04 LTS. This update fixes the problem. \n\nWe apologize for the inconvenience. A remote attacker could use this issue to cause curl to\n crash, resulting in a denial of service, or possibly execute arbitrary\n code. (CVE-2021-22945)\n Patrick Monnerat discovered that curl incorrectly handled upgrades to TLS. 
\n When receiving certain responses from servers, curl would continue without\n TLS even when the option to require a successful upgrade to TLS was\n specified. (CVE-2021-22946)\n Patrick Monnerat discovered that curl incorrectly handled responses\n received before STARTTLS. A remote attacker could possibly use this issue\n to inject responses and intercept communications. (CVE-2021-22947)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 18.04 LTS:\n curl 7.58.0-2ubuntu3.16\n libcurl3-gnutls 7.58.0-2ubuntu3.16\n libcurl3-nss 7.58.0-2ubuntu3.16\n libcurl4 7.58.0-2ubuntu3.16\n\nIn general, a standard system update will make all the necessary changes. These flaws may allow remote attackers to obtain sensitive\ninformation, leak authentication or cookie header data or facilitate a\ndenial of service attack. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 7.74.0-1.3+deb11u2. \n\nWe recommend that you upgrade your curl packages. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202212-01\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: High\n Title: curl: Multiple Vulnerabilities\n Date: December 19, 2022\n Bugs: #803308, #813270, #841302, #843824, #854708, #867679, #878365\n ID: 202212-01\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been found in curl, the worst of which\ncould result in arbitrary code execution. \n\nBackground\n=========\nA command line tool and library for transferring data with URLs. 
\n\nAffected packages\n================\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-misc/curl \u003c 7.86.0 \u003e= 7.86.0\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in curl. Please review the\nCVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll curl users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-misc/curl-7.86.0\"\n\nReferences\n=========\n[ 1 ] CVE-2021-22922\n https://nvd.nist.gov/vuln/detail/CVE-2021-22922\n[ 2 ] CVE-2021-22923\n https://nvd.nist.gov/vuln/detail/CVE-2021-22923\n[ 3 ] CVE-2021-22925\n https://nvd.nist.gov/vuln/detail/CVE-2021-22925\n[ 4 ] CVE-2021-22926\n https://nvd.nist.gov/vuln/detail/CVE-2021-22926\n[ 5 ] CVE-2021-22945\n https://nvd.nist.gov/vuln/detail/CVE-2021-22945\n[ 6 ] CVE-2021-22946\n https://nvd.nist.gov/vuln/detail/CVE-2021-22946\n[ 7 ] CVE-2021-22947\n https://nvd.nist.gov/vuln/detail/CVE-2021-22947\n[ 8 ] CVE-2022-22576\n https://nvd.nist.gov/vuln/detail/CVE-2022-22576\n[ 9 ] CVE-2022-27774\n https://nvd.nist.gov/vuln/detail/CVE-2022-27774\n[ 10 ] CVE-2022-27775\n https://nvd.nist.gov/vuln/detail/CVE-2022-27775\n[ 11 ] CVE-2022-27776\n https://nvd.nist.gov/vuln/detail/CVE-2022-27776\n[ 12 ] CVE-2022-27779\n https://nvd.nist.gov/vuln/detail/CVE-2022-27779\n[ 13 ] CVE-2022-27780\n https://nvd.nist.gov/vuln/detail/CVE-2022-27780\n[ 14 ] CVE-2022-27781\n https://nvd.nist.gov/vuln/detail/CVE-2022-27781\n[ 15 ] CVE-2022-27782\n https://nvd.nist.gov/vuln/detail/CVE-2022-27782\n[ 16 ] CVE-2022-30115\n https://nvd.nist.gov/vuln/detail/CVE-2022-30115\n[ 17 ] CVE-2022-32205\n https://nvd.nist.gov/vuln/detail/CVE-2022-32205\n[ 18 ] 
CVE-2022-32206\n https://nvd.nist.gov/vuln/detail/CVE-2022-32206\n[ 19 ] CVE-2022-32207\n https://nvd.nist.gov/vuln/detail/CVE-2022-32207\n[ 20 ] CVE-2022-32208\n https://nvd.nist.gov/vuln/detail/CVE-2022-32208\n[ 21 ] CVE-2022-32221\n https://nvd.nist.gov/vuln/detail/CVE-2022-32221\n[ 22 ] CVE-2022-35252\n https://nvd.nist.gov/vuln/detail/CVE-2022-35252\n[ 23 ] CVE-2022-35260\n https://nvd.nist.gov/vuln/detail/CVE-2022-35260\n[ 24 ] CVE-2022-42915\n https://nvd.nist.gov/vuln/detail/CVE-2022-42915\n[ 25 ] CVE-2022-42916\n https://nvd.nist.gov/vuln/detail/CVE-2022-42916\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202212-01\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2021-22945" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "VULHUB", "id": "VHN-381419" }, { "db": "VULMON", "id": "CVE-2021-22945" }, { "db": "PACKETSTORM", "id": "166319" }, { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" }, { "db": "PACKETSTORM", "id": "169318" }, { "db": "PACKETSTORM", "id": "170303" } ], "trust": 2.07 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-22945", "trust": 2.3 }, { "db": "HACKERONE", "id": "1269242", "trust": 1.7 }, { "db": "SIEMENS", "id": "SSA-389290", "trust": 1.7 }, { "db": "PACKETSTORM", "id": "170303", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "166319", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "164171", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "164220", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169318", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2021041363", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202104-975", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3022", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.3146", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021091715", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022042569", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031433", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021092301", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021091514", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2021091601", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031104", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022062007", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202109-998", "trust": 0.6 }, { "db": "VULHUB", "id": "VHN-381419", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2021-22945", "trust": 0.1 } 
], "sources": [ { "db": "VULHUB", "id": "VHN-381419" }, { "db": "VULMON", "id": "CVE-2021-22945" }, { "db": "PACKETSTORM", "id": "166319" }, { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" }, { "db": "PACKETSTORM", "id": "169318" }, { "db": "PACKETSTORM", "id": "170303" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202109-998" }, { "db": "NVD", "id": "CVE-2021-22945" } ] }, "id": "VAR-202109-1795", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-381419" } ], "trust": 0.30766129 }, "last_update_date": "2024-03-27T22:17:19.199000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "Haxx libcurl Remediation of resource management error vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=164671" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2021-22945 log" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-22945" }, { "db": "CNNVD", "id": "CNNVD-202109-998" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-415", "trust": 1.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-381419" }, { "db": "NVD", "id": "CVE-2021-22945" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202212-01" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-389290.pdf" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20211029-0003/" }, { "trust": 1.7, "url": "https://support.apple.com/kb/ht213183" }, { "trust": 1.7, "url": "https://www.debian.org/security/2022/dsa-5197" }, { "trust": 1.7, "url": "http://seclists.org/fulldisclosure/2022/mar/29" }, { "trust": 1.7, "url": "https://hackerone.com/reports/1269242" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22945" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/apoak4x73ejtaptsvt7irvdmuwvxnwgd/" }, { "trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/rwlec6yvem2hwubx67sdgpsy4cqb72oe/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021041363" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/libcurl-reuse-after-free-via-mqtt-sending-36417" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-22945" }, { "trust": 0.6, "url": "https://www.ibm.com/support/pages/node/6495403" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170303/gentoo-linux-security-advisory-202212-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022042569" }, { "trust": 0.6, "url": 
"https://packetstormsecurity.com/files/164220/ubuntu-security-notice-usn-5079-3.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021092301" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.3146" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021091601" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022062007" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169318/debian-security-advisory-5197-1.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021091514" }, { "trust": 0.6, "url": "https://support.apple.com/en-us/ht213183" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2021091715" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166319/apple-security-advisory-2022-03-14-4.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3022" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/164171/ubuntu-security-notice-usn-5079-1.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031433" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031104" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22947" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22946" }, { "trust": 0.2, "url": "https://ubuntu.com/security/notices/usn-5079-1" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27782" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32205" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27775" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32206" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27774" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32207" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27781" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-27776" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22576" }, { "trust": 0.1, "url": "http://seclists.org/oss-sec/2021/q3/166" }, { "trust": 0.1, "url": "https://security.archlinux.org/cve-2021-22945" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22609" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4173" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22612" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22610" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4136" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22616" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4192" }, { "trust": 0.1, "url": "https://support.apple.com/en-us/ht201222." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46059" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0156" }, { "trust": 0.1, "url": "https://support.apple.com/downloads/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0158" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22613" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4193" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22600" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36976" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22599" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4166" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0128" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22597" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22611" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22615" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4187" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22582" }, { "trust": 0.1, "url": "https://support.apple.com/ht213183." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22614" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.15" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/curl/7.68.0-1ubuntu2.7" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/curl/7.74.0-1ubuntu2.3" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5079-3" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/curl/7.58.0-2ubuntu3.16" }, { "trust": 0.1, "url": "https://launchpad.net/bugs/1944120" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/curl" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30115" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35260" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22926" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32208" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27780" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35252" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42916" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-42915" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32221" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" } ], "sources": [ { "db": "VULHUB", "id": "VHN-381419" }, { "db": "VULMON", "id": "CVE-2021-22945" }, { "db": "PACKETSTORM", "id": "166319" }, { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" }, { "db": "PACKETSTORM", "id": "169318" }, { "db": "PACKETSTORM", "id": "170303" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202109-998" }, { "db": "NVD", "id": "CVE-2021-22945" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-381419" }, { "db": "VULMON", "id": "CVE-2021-22945" }, { "db": "PACKETSTORM", "id": "166319" }, { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" }, { "db": "PACKETSTORM", "id": "169318" }, { "db": "PACKETSTORM", "id": "170303" }, { "db": "CNNVD", "id": "CNNVD-202104-975" }, { "db": "CNNVD", "id": "CNNVD-202109-998" }, { "db": "NVD", "id": "CVE-2021-22945" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-09-23T00:00:00", "db": "VULHUB", "id": "VHN-381419" }, { "date": "2022-03-15T15:49:02", "db": "PACKETSTORM", "id": "166319" }, { "date": "2021-09-15T15:27:42", "db": "PACKETSTORM", "id": "164171" }, { "date": "2021-09-21T15:39:10", "db": "PACKETSTORM", "id": "164220" }, { "date": "2022-08-28T19:12:00", "db": 
"PACKETSTORM", "id": "169318" }, { "date": "2022-12-19T13:48:31", "db": "PACKETSTORM", "id": "170303" }, { "date": "2021-04-13T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-975" }, { "date": "2021-09-15T00:00:00", "db": "CNNVD", "id": "CNNVD-202109-998" }, { "date": "2021-09-23T13:15:08.690000", "db": "NVD", "id": "CVE-2021-22945" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-12-22T00:00:00", "db": "VULHUB", "id": "VHN-381419" }, { "date": "2021-04-14T00:00:00", "db": "CNNVD", "id": "CNNVD-202104-975" }, { "date": "2023-06-05T00:00:00", "db": "CNNVD", "id": "CNNVD-202109-998" }, { "date": "2024-03-27T15:04:30.460000", "db": "NVD", "id": "CVE-2021-22945" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "164171" }, { "db": "PACKETSTORM", "id": "164220" }, { "db": "PACKETSTORM", "id": "169318" }, { "db": "CNNVD", "id": "CNNVD-202109-998" } ], "trust": 0.9 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Pillow Buffer error vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-975" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202104-975" } ], "trust": 0.6 } }
var-202203-0665
Vulnerability from variot
BIND 9.16.11 -> 9.16.26, 9.17.0 -> 9.18.0 and versions 9.16.11-S1 -> 9.16.26-S1 of the BIND Supported Preview Edition are affected. Specifically crafted TCP streams can cause connections to BIND to remain in CLOSE_WAIT status for an indefinite period of time, even after the client has terminated the connection. Separately, bogus NS records supplied by forwarders may be cached and used by named if it needs to recurse for any reason, causing it to obtain and pass on potentially incorrect answers. That flaw allows a remote attacker to manipulate cache results with incorrect records, leading to queries being sent to the wrong servers and possibly to false information reaching the client. The crafted TCP streams cause BIND to consume resources, leading to a denial of service. (CVE-2022-0396). ========================================================================== Ubuntu Security Notice USN-5332-1 March 17, 2022
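The affected ranges quoted above lend themselves to a mechanical check. A minimal sketch (hypothetical helper, not part of ISC tooling; the ranges are transcribed from the advisory text):

```python
def parse_version(v: str):
    """Split a BIND version such as '9.16.26' or '9.16.26-S1' into a numeric tuple."""
    base = v.split("-")[0]  # drop Supported Preview suffixes like '-S1'
    return tuple(int(part) for part in base.split("."))

def affected_by_cve_2022_0396(version: str) -> bool:
    """True when the version falls inside the ranges listed in the advisory."""
    v = parse_version(version)
    return (9, 16, 11) <= v <= (9, 16, 26) or (9, 17, 0) <= v <= (9, 18, 0)
```

Note this compares upstream version numbers only; distribution packages often backport fixes without changing the upstream version, so a package-level check must follow the vendor advisory instead.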
bind9 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 21.10
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS
Summary:
Several security issues were fixed in Bind.
Software Description: - bind9: Internet Domain Name Server
Details:
Xiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind incorrectly handled certain bogus NS records when using forwarders. A remote attacker could possibly use this issue to manipulate cache results. (CVE-2021-25220) It was discovered that Bind incorrectly handled certain crafted TCP streams. A remote attacker could possibly use this issue to cause Bind to consume resources, leading to a denial of service. This issue only affected Ubuntu 20.04 LTS and Ubuntu 21.10. (CVE-2022-0396)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 21.10: bind9 1:9.16.15-1ubuntu1.2
Ubuntu 20.04 LTS: bind9 1:9.16.1-0ubuntu2.10
Ubuntu 18.04 LTS: bind9 1:9.11.3+dfsg-1ubuntu1.17
In general, a standard system update will make all the necessary changes.
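Until the update lands, the symptom of CVE-2022-0396 — TCP connections to named lingering in CLOSE_WAIT — can be spotted on Linux by reading /proc/net/tcp, where state code 08 denotes CLOSE_WAIT. A rough sketch under those assumptions (illustrative, not vendor tooling):

```python
CLOSE_WAIT = "08"  # TCP state code used in /proc/net/tcp on Linux
DNS_PORT = 53

def count_close_wait(proc_net_tcp: str, local_port: int = DNS_PORT) -> int:
    """Count sockets bound to local_port that sit in CLOSE_WAIT,
    given the text of /proc/net/tcp."""
    count = 0
    for line in proc_net_tcp.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 4:
            continue
        local = fields[1]   # e.g. '00000000:0035' (address:port, both hex)
        state = fields[3]
        if state != CLOSE_WAIT or ":" not in local:
            continue
        if int(local.split(":")[1], 16) == local_port:
            count += 1
    return count
```

In practice this would be fed `open("/proc/net/tcp").read()`; a steadily growing count on port 53 after clients disconnect would be consistent with the behaviour the advisory describes.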
For the oldstable distribution (buster), this problem has been fixed in version 1:9.11.5.P4+dfsg-5.1+deb10u7.
For the stable distribution (bullseye), this problem has been fixed in version 1:9.16.27-1~deb11u1.
We recommend that you upgrade your bind9 packages.
For the detailed security status of bind9 please refer to its security tracker page at: https://security-tracker.debian.org/tracker/bind9
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8 Tjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9 IgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW iRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G ELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV G/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl MuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8 glQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU rbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W bYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4 +8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8 QygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ= =6st2 -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202210-25
https://security.gentoo.org/
Severity: Low
Title: ISC BIND: Multiple Vulnerabilities
Date: October 31, 2022
Bugs: #820563, #835439, #872206
ID: 202210-25
Synopsis
Multiple vulnerabilities have been discovered in ISC BIND, the worst of which could result in denial of service.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1  net-dns/bind        < 9.16.33    >= 9.16.33
2  net-dns/bind-tools  < 9.16.33    >= 9.16.33
Description
Multiple vulnerabilities have been discovered in ISC BIND. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All ISC BIND users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-9.16.33"
All ISC BIND-tools users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-dns/bind-tools-9.16.33"
References
[ 1 ] CVE-2021-25219 https://nvd.nist.gov/vuln/detail/CVE-2021-25219
[ 2 ] CVE-2021-25220 https://nvd.nist.gov/vuln/detail/CVE-2021-25220
[ 3 ] CVE-2022-0396 https://nvd.nist.gov/vuln/detail/CVE-2022-0396
[ 4 ] CVE-2022-2795 https://nvd.nist.gov/vuln/detail/CVE-2022-2795
[ 5 ] CVE-2022-2881 https://nvd.nist.gov/vuln/detail/CVE-2022-2881
[ 6 ] CVE-2022-2906 https://nvd.nist.gov/vuln/detail/CVE-2022-2906
[ 7 ] CVE-2022-3080 https://nvd.nist.gov/vuln/detail/CVE-2022-3080
[ 8 ] CVE-2022-38177 https://nvd.nist.gov/vuln/detail/CVE-2022-38177
[ 9 ] CVE-2022-38178 https://nvd.nist.gov/vuln/detail/CVE-2022-38178
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-25
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis:     Moderate: bind security update
Advisory ID:  RHSA-2022:8068-01
Product:      Red Hat Enterprise Linux
Advisory URL: https://access.redhat.com/errata/RHSA-2022:8068
Issue date:   2022-11-15
CVE Names:    CVE-2021-25220 CVE-2022-0396
====================================================================
1. Summary:
An update for bind is now available for Red Hat Enterprise Linux 9.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat CodeReady Linux Builder (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64 Red Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64
- Description:
The Berkeley Internet Name Domain (BIND) is an implementation of the Domain Name System (DNS) protocols. BIND includes a DNS server (named); a resolver library (routines for applications to use when interfacing with DNS); and tools for verifying that the DNS server is operating correctly.
Security Fix(es):
- bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)
- bind: DoS from specifically crafted TCP packets (CVE-2022-0396)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 9.1 Release Notes linked from the References section.
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
After installing the update, the BIND daemon (named) will be restarted automatically.
- Bugs fixed (https://bugzilla.redhat.com/):
2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability 2064513 - CVE-2022-0396 bind: DoS from specifically crafted TCP packets 2104863 - bind-doc is not shipped to public
- Package List:
Red Hat Enterprise Linux AppStream (v. 9):
Source: bind-9.16.23-5.el9_1.src.rpm
aarch64: bind-9.16.23-5.el9_1.aarch64.rpm bind-chroot-9.16.23-5.el9_1.aarch64.rpm bind-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-debugsource-9.16.23-5.el9_1.aarch64.rpm bind-dnssec-utils-9.16.23-5.el9_1.aarch64.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-libs-9.16.23-5.el9_1.aarch64.rpm bind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-utils-9.16.23-5.el9_1.aarch64.rpm bind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm
noarch: bind-dnssec-doc-9.16.23-5.el9_1.noarch.rpm bind-license-9.16.23-5.el9_1.noarch.rpm python3-bind-9.16.23-5.el9_1.noarch.rpm
ppc64le: bind-9.16.23-5.el9_1.ppc64le.rpm bind-chroot-9.16.23-5.el9_1.ppc64le.rpm bind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-debugsource-9.16.23-5.el9_1.ppc64le.rpm bind-dnssec-utils-9.16.23-5.el9_1.ppc64le.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-libs-9.16.23-5.el9_1.ppc64le.rpm bind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-utils-9.16.23-5.el9_1.ppc64le.rpm bind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
s390x: bind-9.16.23-5.el9_1.s390x.rpm bind-chroot-9.16.23-5.el9_1.s390x.rpm bind-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-debugsource-9.16.23-5.el9_1.s390x.rpm bind-dnssec-utils-9.16.23-5.el9_1.s390x.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-libs-9.16.23-5.el9_1.s390x.rpm bind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-utils-9.16.23-5.el9_1.s390x.rpm bind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm
x86_64: bind-9.16.23-5.el9_1.x86_64.rpm bind-chroot-9.16.23-5.el9_1.x86_64.rpm bind-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-debugsource-9.16.23-5.el9_1.x86_64.rpm bind-dnssec-utils-9.16.23-5.el9_1.x86_64.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-libs-9.16.23-5.el9_1.x86_64.rpm bind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-utils-9.16.23-5.el9_1.x86_64.rpm bind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm
Red Hat CodeReady Linux Builder (v. 9):
aarch64: bind-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-debugsource-9.16.23-5.el9_1.aarch64.rpm bind-devel-9.16.23-5.el9_1.aarch64.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm bind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm
noarch: bind-doc-9.16.23-5.el9_1.noarch.rpm
ppc64le: bind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-debugsource-9.16.23-5.el9_1.ppc64le.rpm bind-devel-9.16.23-5.el9_1.ppc64le.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm bind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm
s390x: bind-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-debugsource-9.16.23-5.el9_1.s390x.rpm bind-devel-9.16.23-5.el9_1.s390x.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm bind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm
x86_64: bind-debuginfo-9.16.23-5.el9_1.i686.rpm bind-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-debugsource-9.16.23-5.el9_1.i686.rpm bind-debugsource-9.16.23-5.el9_1.x86_64.rpm bind-devel-9.16.23-5.el9_1.i686.rpm bind-devel-9.16.23-5.el9_1.x86_64.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.i686.rpm bind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-libs-9.16.23-5.el9_1.i686.rpm bind-libs-debuginfo-9.16.23-5.el9_1.i686.rpm bind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm bind-utils-debuginfo-9.16.23-5.el9_1.i686.rpm bind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-25220 https://access.redhat.com/security/cve/CVE-2022-0396 https://access.redhat.com/security/updates/classification/#moderate https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBY3PhLdzjgjWX9erEAQhVSw/9HlIwMZZuRgTsbY2yARvJ+sRk08hViRo6 ++sV0vMtt3ym5eQES1al4uwAFbVH3B+EZLVuox02PnKVvIM35QnzVFxSa24HToTp l3tl+c9QnDwx3VGceX9og5o/ezSKqT8UeMQF/gamcB5kwGbbeb+Gp7cpSyXsmjB1 h418DMq/BBE1kLx2MAmIAn/r8x8ISsRbk3j96VEtLrQDtbSKCrE7jmQMaGRB4NhK 4pcgEdcVC6mpBIBRSoLqSVvY9cEdbWqB2LBKArSic/GS2RFfXiSTbPP+kHhd8WHF 0pHQpQa2CXqWuoyrk4cmlvyqmp+C1oCuwsjUWm3dIouIpLU3P1PH3Xua+DMcHfNl z3wW5E8hihVQ7taw/c6jKMlIrPVzdNM7zfdqV4PBoMQ6y6nPDP23wNGIBMIArjO/ n841K1Lzp1vrChLKgtYOK4H/s6Fbtb/+fe6Q5wOVPPEeksfoKzjJjZj/J7J+RymH Bd6n+f9iMQzOkj9zb6cgrvt2aLcr29XHfcCRH81i/CEPAEFGT86qOXqIZO0+qV/u qhHDKy3rLqYsOR4BlwhFhovUGCt8rBJ8LOiZlUTxzNG4PNze4F1hG1d0qzYQv0Iw zfOrgT8NGDmGCt2nwtmy813NDmzVegwrS7w0ayLzpcwcJMVOoO0nKi5kzX1slEyu rbPwX0ROLTo=0klO -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202203-0665", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, 
{ "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.16.11" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "34" }, { "model": "h500e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "bind", "scope": "lt", "trust": 1.0, "vendor": "isc", "version": "9.16.27" }, { "model": "h700e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "h300e", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "bind", "scope": "lte", "trust": 1.0, "vendor": "isc", "version": "9.18.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "bind", "scope": "gte", "trust": 1.0, "vendor": "isc", "version": "9.17.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "bind", "scope": null, "trust": 0.8, "vendor": "isc", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "esmpro/serveragent", "scope": null, "trust": 0.8, "vendor": "\u65e5\u672c\u96fb\u6c17", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "NVD", "id": "CVE-2022-0396" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { 
"cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "9.18.0", "versionStartIncluding": "9.17.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:supported_preview:*:*:*", "cpe_name": [], "versionEndExcluding": "9.16.27", "versionStartIncluding": "9.16.11", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:isc:bind:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "9.16.27", "versionStartIncluding": "9.16.11", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:34:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": 
[], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": 
[], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-0396" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202203-1543" } ], "trust": 0.6 }, "cve": "CVE-2022-0396", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 4.3, "confidentialityImpact": "None", 
"exploitabilityScore": null, "id": "CVE-2022-0396", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "LOW", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 2.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "OTHER", "availabilityImpact": "Low", "baseScore": 5.3, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "JVNDB-2022-001799", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-0396", "trust": 1.8, "value": "MEDIUM" }, { "author": "security-officer@isc.org", "id": "CVE-2022-0396", "trust": 1.0, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202203-1543", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2022-0396", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" }, { "db": "NVD", "id": "CVE-2022-0396" }, { "db": "NVD", "id": "CVE-2022-0396" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": "BIND 9.16.11 -\u003e 9.16.26, 9.17.0 -\u003e 9.18.0 and versions 9.16.11-S1 -\u003e 9.16.26-S1 of the BIND Supported Preview Edition. Specifically crafted TCP streams can cause connections to BIND to remain in CLOSE_WAIT status for an indefinite period of time, even after the client has terminated the connection. BIND , even after the client closes the connection. Bogus NS records supplied by the forwarders may be cached and used by name if it needs to recurse for any reason. This issue causes it to obtain and pass on potentially incorrect answers. This flaw allows a remote malicious user to manipulate cache results with incorrect records, leading to queries made to the wrong servers, possibly resulting in false information received on the client\u0027s end. This issue results in BIND consuming resources, leading to a denial of service. (CVE-2022-0396). ==========================================================================\nUbuntu Security Notice USN-5332-1\nMarch 17, 2022\n\nbind9 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 21.10\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in Bind. \n\nSoftware Description:\n- bind9: Internet Domain Name Server\n\nDetails:\n\nXiang Li, Baojun Liu, Chaoyi Lu, and Changgen Zou discovered that Bind\nincorrectly handled certain bogus NS records when using forwarders. A\nremote attacker could possibly use this issue to manipulate cache results. This issue only affected\nUbuntu 21.10. 
(CVE-2022-0396)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 21.10:\n bind9 1:9.16.15-1ubuntu1.2\n\nUbuntu 20.04 LTS:\n bind9 1:9.16.1-0ubuntu2.10\n\nUbuntu 18.04 LTS:\n bind9 1:9.11.3+dfsg-1ubuntu1.17\n\nIn general, a standard system update will make all the necessary changes. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1:9.11.5.P4+dfsg-5.1+deb10u7. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1:9.16.27-1~deb11u1. \n\nWe recommend that you upgrade your bind9 packages. \n\nFor the detailed security status of bind9 please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/bind9\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmI010UACgkQEMKTtsN8\nTjbp3xAAil38qfAIdNkaIxY2bauvTyZDWzr6KUjph0vzmLEoAFQ3bysVSGlCnZk9\nIgdyfPRWQ+Bjau1/dlhNYaTlnQajbeyvCXfJcjRRgtUDCp7abZcOcb1WDu8jWLGW\niRtKsvKKrTKkIou5LgDlyqZyf6OzjgRdwtm86GDPQiCaSEpmbRt+APj5tkIA9R1G\nELWuZsjbIraBU0TsNfOalgNpAWtSBayxKtWB69J8rxUV69JI194A4AJ0wm9SPpFV\nG/TzlyHp1dUZJRLNmZOZU/dq4pPsXzh9I4QCg1kJWsVHe2ycAJKho6hr5iy43fNl\nMuokfI9YnU6/9SjHrQAWp1X/6MYCR8NieJ933W89/Zb8eTjTZC8EQGo6fkA287G8\nglQOrJHMQyV+b97lT67+ioTHNzTEBXTih7ZDeC1TlLqypCNYhRF/ll0Hx/oeiJFU\nrbjh2Og9huhD5JH8z8YAvY2g81e7KdPxazuKJnQpxGutqddCuwBvyI9fovYrah9W\nbYD6rskLZM2x90RI2LszHisl6FV5k37PaczamlRqGgbbMb9YlnDFjJUbM8rZZgD4\n+8u/AkHq2+11pTtZ40NYt1gpdidmIC/gzzha2TfZCHMs44KPMMdH+Fid1Kc6/Cq8\nQygtL4M387J9HXUrlN7NDUOrDVuVqfBG+ve3i9GCZzYjwtajTAQ=\n=6st2\n-----END PGP SIGNATURE-----\n. 
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-25\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: ISC BIND: Multiple Vulnerabilities\n Date: October 31, 2022\n Bugs: #820563, #835439, #872206\n ID: 202210-25\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in ISC BIND, the worst of\nwhich could result in denial of service. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 net-dns/bind \u003c 9.16.33 \u003e= 9.16.33\n 2 net-dns/bind-tools \u003c 9.16.33 \u003e= 9.16.33\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in ISC BIND. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. 
\n\nResolution\n==========\n\nAll ISC BIND users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-9.16.33\"\n\nAll ISC BIND-tools users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-dns/bind-tools-9.16.33\"\n\nReferences\n==========\n\n[ 1 ] CVE-2021-25219\n https://nvd.nist.gov/vuln/detail/CVE-2021-25219\n[ 2 ] CVE-2021-25220\n https://nvd.nist.gov/vuln/detail/CVE-2021-25220\n[ 3 ] CVE-2022-0396\n https://nvd.nist.gov/vuln/detail/CVE-2022-0396\n[ 4 ] CVE-2022-2795\n https://nvd.nist.gov/vuln/detail/CVE-2022-2795\n[ 5 ] CVE-2022-2881\n https://nvd.nist.gov/vuln/detail/CVE-2022-2881\n[ 6 ] CVE-2022-2906\n https://nvd.nist.gov/vuln/detail/CVE-2022-2906\n[ 7 ] CVE-2022-3080\n https://nvd.nist.gov/vuln/detail/CVE-2022-3080\n[ 8 ] CVE-2022-38177\n https://nvd.nist.gov/vuln/detail/CVE-2022-38177\n[ 9 ] CVE-2022-38178\n https://nvd.nist.gov/vuln/detail/CVE-2022-38178\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-25\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: bind security update\nAdvisory ID: RHSA-2022:8068-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:8068\nIssue date: 2022-11-15\nCVE Names: CVE-2021-25220 CVE-2022-0396\n====================================================================\n1. Summary:\n\nAn update for bind is now available for Red Hat Enterprise Linux 9. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux AppStream (v. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. Description:\n\nThe Berkeley Internet Name Domain (BIND) is an implementation of the Domain\nName System (DNS) protocols. BIND includes a DNS server (named); a resolver\nlibrary (routines for applications to use when interfacing with DNS); and\ntools for verifying that the DNS server is operating correctly. \n\nSecurity Fix(es):\n\n* bind: DNS forwarders - cache poisoning vulnerability (CVE-2021-25220)\n\n* bind: DoS from specifically crafted TCP packets (CVE-2022-0396)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 9.1 Release Notes linked from the References section. \n\n4. 
Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nAfter installing the update, the BIND daemon (named) will be restarted\nautomatically. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2064512 - CVE-2021-25220 bind: DNS forwarders - cache poisoning vulnerability\n2064513 - CVE-2022-0396 bind: DoS from specifically crafted TCP packets\n2104863 - bind-doc is not shipped to public\n\n6. Package List:\n\nRed Hat Enterprise Linux AppStream (v. 9):\n\nSource:\nbind-9.16.23-5.el9_1.src.rpm\n\naarch64:\nbind-9.16.23-5.el9_1.aarch64.rpm\nbind-chroot-9.16.23-5.el9_1.aarch64.rpm\nbind-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-debugsource-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\n\nnoarch:\nbind-dnssec-doc-9.16.23-5.el9_1.noarch.rpm\nbind-license-9.16.23-5.el9_1.noarch.rpm\npython3-bind-9.16.23-5.el9_1.noarch.rpm\n\nppc64le:\nbind-9.16.23-5.el9_1.ppc64le.rpm\nbind-chroot-9.16.23-5.el9_1.ppc64le.rpm\nbind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-debugsource-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\n\ns390x:\nbind-9.16.23-5.el9_1.s390x.rpm\nbind-chroot-9.16.23-5.el9_1.s390x.rpm\nbind-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-debugsource-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-libs-9.16.23-5.el9_1.s390x.rpm\nbind-libs-debuginfo-9.16.23-5.
el9_1.s390x.rpm\nbind-utils-9.16.23-5.el9_1.s390x.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\n\nx86_64:\nbind-9.16.23-5.el9_1.x86_64.rpm\nbind-chroot-9.16.23-5.el9_1.x86_64.rpm\nbind-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-debugsource-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-utils-9.16.23-5.el9_1.x86_64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. 9):\n\naarch64:\nbind-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-debugsource-9.16.23-5.el9_1.aarch64.rpm\nbind-devel-9.16.23-5.el9_1.aarch64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.aarch64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.aarch64.rpm\n\nnoarch:\nbind-doc-9.16.23-5.el9_1.noarch.rpm\n\nppc64le:\nbind-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-debugsource-9.16.23-5.el9_1.ppc64le.rpm\nbind-devel-9.16.23-5.el9_1.ppc64le.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.ppc64le.rpm\n\ns390x:\nbind-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-debugsource-9.16.23-5.el9_1.s390x.rpm\nbind-devel-9.16.23-5.el9_1.s390x.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.s390x.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.s390x.rpm\n\nx86_64:\nbind-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-debugsource-9.16.23-5.el9_1.i686.rpm\nbind-debugsource-9.16.23-5.el9_1.x86_64.rpm\nbind-devel-9.16.23-5.el9_1.i686.rpm\nbind-devel-9.16.23-5.el9_1.x86_64.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-dnssec-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\nbind-libs-9.16.23-5.el9_1.i686.rpm\nbind-libs-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-libs-debuginfo-9.16.23
-5.el9_1.x86_64.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.i686.rpm\nbind-utils-debuginfo-9.16.23-5.el9_1.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-25220\nhttps://access.redhat.com/security/cve/CVE-2022-0396\nhttps://access.redhat.com/security/updates/classification/#moderate\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBY3PhLdzjgjWX9erEAQhVSw/9HlIwMZZuRgTsbY2yARvJ+sRk08hViRo6\n++sV0vMtt3ym5eQES1al4uwAFbVH3B+EZLVuox02PnKVvIM35QnzVFxSa24HToTp\nl3tl+c9QnDwx3VGceX9og5o/ezSKqT8UeMQF/gamcB5kwGbbeb+Gp7cpSyXsmjB1\nh418DMq/BBE1kLx2MAmIAn/r8x8ISsRbk3j96VEtLrQDtbSKCrE7jmQMaGRB4NhK\n4pcgEdcVC6mpBIBRSoLqSVvY9cEdbWqB2LBKArSic/GS2RFfXiSTbPP+kHhd8WHF\n0pHQpQa2CXqWuoyrk4cmlvyqmp+C1oCuwsjUWm3dIouIpLU3P1PH3Xua+DMcHfNl\nz3wW5E8hihVQ7taw/c6jKMlIrPVzdNM7zfdqV4PBoMQ6y6nPDP23wNGIBMIArjO/\nn841K1Lzp1vrChLKgtYOK4H/s6Fbtb/+fe6Q5wOVPPEeksfoKzjJjZj/J7J+RymH\nBd6n+f9iMQzOkj9zb6cgrvt2aLcr29XHfcCRH81i/CEPAEFGT86qOXqIZO0+qV/u\nqhHDKy3rLqYsOR4BlwhFhovUGCt8rBJ8LOiZlUTxzNG4PNze4F1hG1d0qzYQv0Iw\nzfOrgT8NGDmGCt2nwtmy813NDmzVegwrS7w0ayLzpcwcJMVOoO0nKi5kzX1slEyu\nrbPwX0ROLTo=0klO\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n", "sources": [ { "db": "NVD", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169773" }, { 
"db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "169894" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-0396", "trust": 3.8 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 1.5 }, { "db": "JVN", "id": "JVNVU99475301", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU98927070", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2022-001799", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "166354", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169773", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169587", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169894", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2022031701", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031728", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022041925", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022032124", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1149", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1180", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.5750", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1719", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.1160", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202203-1543", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-0396", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169261", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "169894" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" }, 
{ "db": "NVD", "id": "CVE-2022-0396" } ] }, "id": "VAR-202203-0665", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-01-21T20:07:11.050000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "DoS\u00a0from\u00a0specifically\u00a0crafted\u00a0TCP\u00a0packets NEC NEC Product security information", "trust": 0.8, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/" }, { "title": "ISC BIND Remediation of resource management error vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=186055" }, { "title": "Ubuntu Security Notice: USN-5332-1: Bind vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5332-1" }, { "title": "Red Hat: Moderate: bind security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228068 - security advisory" }, { "title": "Debian Security Advisories: DSA-5105-1 bind9 -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=16d84b908a424f50b3236db9219500e3" }, { "title": "Arch Linux Advisories: [ASA-202204-5] bind: denial of service", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_advisories\u0026qid=asa-202204-5" }, { "title": "Arch Linux Issues: ", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=arch_linux_issues\u0026qid=cve-2022-0396" }, { "title": "Amazon Linux 2022: 
ALAS2022-2022-166", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-166" }, { "title": "Amazon Linux 2022: ALAS2022-2022-138", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=amazon_linux2022\u0026qid=alas2022-2022-138" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-404", "trust": 1.0 }, { "problemtype": "Improper shutdown and release of resources (CWE-404) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "NVD", "id": "CVE-2022-0396" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://kb.isc.org/v1/docs/cve-2022-0396" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202210-25" }, { "trust": 1.7, "url": "https://security.netapp.com/advisory/ntap-20220408-0001/" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0396" }, { "trust": 1.0, "url": "https://lists.fedoraproject.org/archives/list/package-announce%40lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/" }, { "trust": 0.9, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.8, "url": "http://jvn.jp/vu/jvnvu98927070/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu99475301/" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-0396" }, { 
"trust": 0.7, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/nyd7us4hzrfugaj66zthfbyvp5n3oqby/" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-0396/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/166354/ubuntu-security-notice-usn-5332-1.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169894/red-hat-security-advisory-2022-8068-01.html" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/isc-bind-denial-of-service-via-keep-response-order-tcp-connection-slots-37817" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031728" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1160" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169773/red-hat-security-advisory-2022-7643-01.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1180" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169587/gentoo-linux-security-advisory-202210-25.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022041925" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1719" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.5750" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031701" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022032124" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.1149" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25220" }, { "trust": 0.2, "url": "https://ubuntu.com/security/notices/usn-5332-1" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-25220" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/team/contact/" }, { "trust": 0.2, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.2, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.2, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/404.html" }, { "trust": 0.1, "url": "https://www.debian.org/security/2022/dsa-5105" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://alas.aws.amazon.com/al2022/alas-2022-166.html" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.1-0ubuntu2.10" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.16.15-1ubuntu1.2" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/bind9/1:9.11.3+dfsg-1ubuntu1.17" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/bind9" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.7_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:7643" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38178" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2906" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2881" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2795" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3080" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38177" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.1_release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8068" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "169894" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" }, { "db": "NVD", "id": "CVE-2022-0396" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-0396" }, { "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "db": "PACKETSTORM", "id": "166354" }, { "db": "PACKETSTORM", "id": "169261" }, { "db": "PACKETSTORM", "id": "169773" }, { "db": "PACKETSTORM", "id": "169587" }, { "db": "PACKETSTORM", "id": "169894" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" }, { "db": "NVD", "id": "CVE-2022-0396" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-03-23T00:00:00", "db": "VULMON", "id": "CVE-2022-0396" }, { "date": "2022-05-12T00:00:00", "db": "JVNDB", 
"id": "JVNDB-2022-001799" }, { "date": "2022-03-17T15:54:20", "db": "PACKETSTORM", "id": "166354" }, { "date": "2022-03-28T19:12:00", "db": "PACKETSTORM", "id": "169261" }, { "date": "2022-11-08T13:49:24", "db": "PACKETSTORM", "id": "169773" }, { "date": "2022-10-31T14:50:53", "db": "PACKETSTORM", "id": "169587" }, { "date": "2022-11-16T16:09:16", "db": "PACKETSTORM", "id": "169894" }, { "date": "2022-03-16T00:00:00", "db": "CNNVD", "id": "CNNVD-202203-1543" }, { "date": "2022-03-23T11:15:08.380000", "db": "NVD", "id": "CVE-2022-0396" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-11-16T00:00:00", "db": "VULMON", "id": "CVE-2022-0396" }, { "date": "2022-09-20T06:14:00", "db": "JVNDB", "id": "JVNDB-2022-001799" }, { "date": "2022-11-17T00:00:00", "db": "CNNVD", "id": "CNNVD-202203-1543" }, { "date": "2024-01-21T02:05:10.713000", "db": "NVD", "id": "CVE-2022-0396" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "166354" }, { "db": "CNNVD", "id": "CNNVD-202203-1543" } ], "trust": 0.7 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "BIND\u00a0 connection indefinitely \u00a0CLOSE_WAIT\u00a0 Vulnerabilities that remain in status", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-001799" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "resource management error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202203-1543" } ], "trust": 0.6 } }
var-202102-1488
Vulnerability from variot
The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However, it fails to correctly handle errors that may occur while parsing the issuer field (which can happen if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer dereference and a crash, leading to a potential denial-of-service attack. The function X509_issuer_and_serial_hash() is never called directly by OpenSSL itself, so applications are vulnerable only if they call this function directly on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue; users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are also affected, but OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y; other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
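The bug class behind CVE-2021-23841 — a parse failure that yields a NULL result which is then used without an error check — can be sketched generically. The following is an illustrative Python model only, not OpenSSL's actual implementation: the toy parser, the field encoding, and the helper names are hypothetical.

```python
import hashlib

def parse_issuer(issuer_der: bytes):
    """Toy stand-in for OpenSSL's issuer parsing: returns the issuer
    bytes, or None when the field is malformed (a parse error)."""
    # A DER SEQUENCE starts with tag 0x30; anything else is "malformed" here.
    if not issuer_der or issuer_der[0] != 0x30:
        return None
    return issuer_der

def hash_unsafe(issuer_der: bytes, serial: int) -> str:
    # Mirrors the flaw: the parse result is fed to the hash with no
    # error check, so malformed input crashes the caller (a TypeError
    # here; a NULL pointer dereference in the C original).
    issuer = parse_issuer(issuer_der)
    return hashlib.md5(issuer + serial.to_bytes(8, "big")).hexdigest()

def hash_safe(issuer_der: bytes, serial: int):
    # The fixed pattern: propagate the parse error instead of using NULL.
    issuer = parse_issuer(issuer_der)
    if issuer is None:
        return None
    return hashlib.md5(issuer + serial.to_bytes(8, "big")).hexdigest()

good = bytes([0x30]) + b"example issuer"
bad = b"\x00garbage"

print(hash_safe(good, 1234))  # hex digest string
print(hash_safe(bad, 1234))   # None, no crash
```

Calling hash_unsafe(bad, 1234) raises a TypeError, the Python analogue of the crash; the fix shipped in OpenSSL 1.1.1j/1.0.2y adds exactly this kind of error check before the parsed issuer is used.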
APPLE-SA-2021-05-25-2 macOS Big Sur 11.4
macOS Big Sur 11.4 addresses the following issues. Information about the security content is also available at https://support.apple.com/HT212529.
AMD Available for: macOS Big Sur Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30678: Yu Wang of Didi Research America
AMD Available for: macOS Big Sur Impact: A local user may be able to cause unexpected system termination or read kernel memory Description: A logic issue was addressed with improved state management. CVE-2021-30676: shrek_wzw
App Store Available for: macOS Big Sur Impact: A malicious application may be able to break out of its sandbox Description: A path handling issue was addressed with improved validation. CVE-2021-30688: Thijs Alkemade of Computest Research Division
AppleScript Available for: macOS Big Sur Impact: A malicious application may bypass Gatekeeper checks Description: A logic issue was addressed with improved state management. CVE-2021-30669: Yair Hoffmann
Audio Available for: macOS Big Sur Impact: Processing a maliciously crafted audio file may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30707: hjy79425575 working with Trend Micro Zero Day Initiative
Audio Available for: macOS Big Sur Impact: Parsing a maliciously crafted audio file may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro
Core Services Available for: macOS Big Sur Impact: A malicious application may be able to gain root privileges Description: A validation issue existed in the handling of symlinks. This issue was addressed with improved validation of symlinks. CVE-2021-30681: Zhongcheng Li (CK01)
CoreAudio Available for: macOS Big Sur Impact: Processing a maliciously crafted audio file may disclose restricted memory Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30686: Mickey Jin of Trend Micro
Crash Reporter Available for: macOS Big Sur Impact: A malicious application may be able to modify protected parts of the file system Description: A logic issue was addressed with improved state management. CVE-2021-30727: Cees Elzinga
CVMS Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: This issue was addressed with improved checks. CVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro
Dock Available for: macOS Big Sur Impact: A malicious application may be able to access a user's call history Description: An access issue was addressed with improved access restrictions. CVE-2021-30673: Josh Parnham (@joshparnham)
Graphics Drivers Available for: macOS Big Sur Impact: A remote attacker may cause an unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30684: Liu Long of Ant Security Light-Year Lab
Graphics Drivers Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30735: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative
Heimdal Available for: macOS Big Sur Impact: A local user may be able to leak sensitive user information Description: A logic issue was addressed with improved state management. CVE-2021-30697: Gabe Kirkpatrick (@gabe_k)
Heimdal Available for: macOS Big Sur Impact: A malicious application may cause a denial of service or potentially disclose memory contents Description: A memory corruption issue was addressed with improved state management. CVE-2021-30710: Gabe Kirkpatrick (@gabe_k)
Heimdal Available for: macOS Big Sur Impact: A malicious application could execute arbitrary code leading to compromise of user information Description: A use after free issue was addressed with improved memory management. CVE-2021-30683: Gabe Kirkpatrick (@gabe_k)
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to disclosure of user information Description: This issue was addressed with improved checks. CVE-2021-30700: Ye Zhang(@co0py_Cat) of Baidu Security
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: This issue was addressed with improved checks. CVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of Baidu Security
ImageIO Available for: macOS Big Sur Impact: Processing a maliciously crafted ASTC file may disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30705: Ye Zhang of Baidu Security
Intel Graphics Driver Available for: macOS Big Sur Impact: A local user may be able to cause unexpected system termination or read kernel memory Description: An out-of-bounds read issue was addressed by removing the vulnerable code. CVE-2021-30719: an anonymous researcher working with Trend Micro Zero Day Initiative
Intel Graphics Driver Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: An out-of-bounds write issue was addressed with improved bounds checking. CVE-2021-30728: Liu Long of Ant Security Light-Year Lab CVE-2021-30726: Yinyi Wu(@3ndy1) of Qihoo 360 Vulcan Team
Kernel Available for: macOS Big Sur Impact: A malicious application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved validation. CVE-2021-30740: Linus Henze (pinauten.de)
Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A logic issue was addressed with improved state management. CVE-2021-30704: an anonymous researcher
Kernel Available for: macOS Big Sur Impact: Processing a maliciously crafted message may lead to a denial of service Description: A logic issue was addressed with improved state management. CVE-2021-30715: The UK's National Cyber Security Centre (NCSC)
Kernel Available for: macOS Big Sur Impact: An application may be able to execute arbitrary code with kernel privileges Description: A buffer overflow was addressed with improved size validation. CVE-2021-30736: Ian Beer of Google Project Zero
Kernel Available for: macOS Big Sur Impact: A local attacker may be able to elevate their privileges Description: A memory corruption issue was addressed with improved validation. CVE-2021-30739: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong Security Lab
Kext Management Available for: macOS Big Sur Impact: A local user may be able to load unsigned kernel extensions Description: A logic issue was addressed with improved state management. CVE-2021-30680: Csaba Fitzl (@theevilbit) of Offensive Security
LaunchServices Available for: macOS Big Sur Impact: A malicious application may be able to break out of its sandbox Description: This issue was addressed with improved environment sanitization. CVE-2021-30677: Ron Waisberg (@epsilan)
Login Window Available for: macOS Big Sur Impact: A person with physical access to a Mac may be able to bypass Login Window Description: A logic issue was addressed with improved state management. CVE-2021-30702: Jewel Lambert of Original Spin, LLC.
Mail Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to misrepresent application state Description: A logic issue was addressed with improved state management. CVE-2021-30696: Fabian Ising and Damian Poddebniak of Münster University of Applied Sciences
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An information disclosure issue was addressed with improved state management. CVE-2021-30723: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30691: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30692: Mickey Jin (@patch1t) of Trend Micro CVE-2021-30694: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: A memory corruption issue was addressed with improved state management. CVE-2021-30725: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30746: Mickey Jin (@patch1t) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted image may lead to arbitrary code execution Description: A validation issue was addressed with improved logic. CVE-2021-30693: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: An out-of-bounds read was addressed with improved bounds checking. CVE-2021-30695: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may lead to unexpected application termination or arbitrary code execution Description: An out-of-bounds read was addressed with improved input validation. CVE-2021-30708: Mickey Jin (@patch1t) & Junzhi Lu (@pwn0rz) of Trend Micro
Model I/O Available for: macOS Big Sur Impact: Processing a maliciously crafted USD file may disclose memory contents Description: This issue was addressed with improved checks. CVE-2021-30709: Mickey Jin (@patch1t) of Trend Micro
NSOpenPanel Available for: macOS Big Sur Impact: An application may be able to gain elevated privileges Description: This issue was addressed by removing the vulnerable code. CVE-2021-30679: Gabe Kirkpatrick (@gabe_k)
OpenLDAP Available for: macOS Big Sur Impact: A remote attacker may be able to cause a denial of service Description: This issue was addressed with improved checks. CVE-2020-36226 CVE-2020-36227 CVE-2020-36223 CVE-2020-36224 CVE-2020-36225 CVE-2020-36221 CVE-2020-36228 CVE-2020-36222 CVE-2020-36230 CVE-2020-36229
PackageKit Available for: macOS Big Sur Impact: A malicious application may be able to overwrite arbitrary files Description: An issue with path validation logic for hardlinks was addressed with improved path sanitization. CVE-2021-30738: Qingyang Chen of Topsec Alpha Team and Csaba Fitzl (@theevilbit) of Offensive Security
Security Available for: macOS Big Sur Impact: Processing a maliciously crafted certificate may lead to arbitrary code execution Description: A memory corruption issue in the ASN.1 decoder was addressed by removing the vulnerable code. CVE-2021-30737: xerub
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to perform denial of service Description: A logic issue was addressed with improved state management. CVE-2021-30716: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to execute arbitrary code Description: A memory corruption issue was addressed with improved state management. CVE-2021-30717: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to leak sensitive user information Description: A path handling issue was addressed with improved validation. CVE-2021-30721: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: An attacker in a privileged network position may be able to leak sensitive user information Description: An information disclosure issue was addressed with improved state management. CVE-2021-30722: Aleksandar Nikolic of Cisco Talos
smbx Available for: macOS Big Sur Impact: A remote attacker may be able to cause unexpected application termination or arbitrary code execution Description: A logic issue was addressed with improved state management. CVE-2021-30712: Aleksandar Nikolic of Cisco Talos
Software Update Available for: macOS Big Sur Impact: A person with physical access to a Mac may be able to bypass Login Window during a software update Description: This issue was addressed with improved checks. CVE-2021-30668: Syrus Kimiagar and Danilo Paffi Monteiro
SoftwareUpdate Available for: macOS Big Sur Impact: A non-privileged user may be able to modify restricted settings Description: This issue was addressed with improved checks. CVE-2021-30718: SiQian Wei of ByteDance Security
TCC Available for: macOS Big Sur Impact: A malicious application may be able to send unauthorized Apple events to Finder Description: A validation issue was addressed with improved logic. CVE-2021-30671: Ryan Bell (@iRyanBell)
TCC Available for: macOS Big Sur Impact: A malicious application may be able to bypass Privacy preferences. Apple is aware of a report that this issue may have been actively exploited. Description: A permissions issue was addressed with improved validation. CVE-2021-30713: an anonymous researcher
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A cross-origin issue with iframe elements was addressed with improved tracking of security origins. CVE-2021-30744: Dan Hite of jsontop
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: A use after free issue was addressed with improved memory management. CVE-2021-21779: Marcin Towalski of Cisco Talos
WebKit Available for: macOS Big Sur Impact: A malicious application may be able to leak sensitive user information Description: A logic issue was addressed with improved restrictions. CVE-2021-30682: an anonymous researcher and 1lastBr3ath
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to universal cross site scripting Description: A logic issue was addressed with improved state management. CVE-2021-30689: an anonymous researcher
WebKit Available for: macOS Big Sur Impact: Processing maliciously crafted web content may lead to arbitrary code execution Description: Multiple memory corruption issues were addressed with improved memory handling. CVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab, ASU, working with Trend Micro Zero Day Initiative CVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems) working with Trend Micro Zero Day Initiative
WebKit Available for: macOS Big Sur Impact: A malicious website may be able to access restricted ports on arbitrary servers Description: A logic issue was addressed with improved restrictions. CVE-2021-30720: David Schütz (@xdavidhu)
WebRTC Available for: macOS Big Sur Impact: A remote attacker may be able to cause a denial of service Description: A null pointer dereference was addressed with improved input validation. CVE-2021-23841: Tavis Ormandy of Google CVE-2021-30698: Tavis Ormandy of Google
Additional recognition
App Store We would like to acknowledge Thijs Alkemade of Computest Research Division for their assistance.
CoreCapture We would like to acknowledge Zuozhi Fan (@pattern_F_) of Ant-financial TianQiong Security Lab for their assistance.
ImageIO We would like to acknowledge Jzhu working with Trend Micro Zero Day Initiative and an anonymous researcher for their assistance.
Mail Drafts We would like to acknowledge Lauritz Holtmann (@lauritz) for their assistance.
WebKit We would like to acknowledge Chris Salls (@salls) of Makai Security for their assistance.
Installation note:
This update may be obtained from the Mac App Store or Apple's Software Downloads web site: https://support.apple.com/downloads/
Information will also be posted to the Apple Security Updates web site: https://support.apple.com/kb/HT201222
This message is signed with Apple's Product Security PGP key, and details are available at: https://www.apple.com/support/security/pgp/
Bugs fixed (https://bugzilla.redhat.com/):
1944888 - CVE-2021-21409 netty: Request smuggling via content-length header 2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn't allow setting size restrictions for decompressed data 2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn't restrict chunk length and may buffer skippable chunks in an unnecessary way 2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable
- 8) - aarch64, ppc64le, s390x, x86_64
Additional Changes:
For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.5 Release Notes linked from the References section.
1965362 - In renegotiated handshake openssl sends extensions which client didn't advertise in second ClientHello [rhel-8]
- 8) - noarch
- Description:
EDK (Embedded Development Kit) is a project to enable UEFI support for Virtual Machines. This package contains a sample 64-bit UEFI firmware for QEMU and KVM.
The following packages have been upgraded to a later upstream version: edk2 (20210527gite1999b264f1f).
===================================================================== Red Hat Security Advisory
Synopsis: Moderate: openssl security update Advisory ID: RHSA-2021:3798-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2021:3798 Issue date: 2021-10-12 CVE Names: CVE-2021-23840 CVE-2021-23841 =====================================================================
- Summary:
An update for openssl is now available for Red Hat Enterprise Linux 7.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Enterprise Linux Client (v. 7) - x86_64 Red Hat Enterprise Linux Client Optional (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode (v. 7) - x86_64 Red Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64 Red Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64 Red Hat Enterprise Linux Workstation (v. 7) - x86_64 Red Hat Enterprise Linux Workstation Optional (v. 7) - x86_64
- Description:
OpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, as well as a full-strength general-purpose cryptography library.
Security Fix(es):
- openssl: integer overflow in CipherUpdate (CVE-2021-23840)
- openssl: NULL pointer dereference in X509_issuer_and_serial_hash() (CVE-2021-23841)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
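The upstream fixes for these issues shipped in OpenSSL 1.0.2y and 1.1.1j. As a rough triage aid, an upstream-style OpenSSL version string can be compared against those fixed letter releases. This is a minimal sketch, not a Red Hat-aware check: RHEL backports fixes without bumping the upstream letter, so on RHEL the package release (here, openssl-1.0.2k-22.el7_9), not the base version, is authoritative.

```python
import re

# Fixed upstream letter releases per branch, taken from the OpenSSL advisories.
FIXED_LETTER = {(1, 0, 2): "y", (1, 1, 1): "j"}

def parse_version(version: str):
    """Split an upstream OpenSSL version like '1.0.2x' into a branch tuple and letter suffix."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", version)
    if m is None:
        raise ValueError(f"unrecognised OpenSSL version: {version!r}")
    return (int(m[1]), int(m[2]), int(m[3])), m[4]

def upstream_is_patched(version: str) -> bool:
    """True if the given upstream version already carries the fix for its branch."""
    branch, letter = parse_version(version)
    fixed = FIXED_LETTER.get(branch)
    if fixed is None:
        return True  # branch not covered by this advisory
    # Single-letter suffixes sort lexicographically ('x' < 'y'); an empty
    # suffix (the .0 release of a branch) predates every lettered release.
    return letter >= fixed
```

For example, `upstream_is_patched("1.0.2x")` is `False` while `upstream_is_patched("1.0.2y")` is `True`.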
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
For the update to take effect, all services linked to the OpenSSL library must be restarted, or the system rebooted.
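One way to locate services that still need a restart (a hedged sketch, not part of the advisory): on Linux, a process started before the update keeps the old, now-deleted library mapped, which shows up in /proc/&lt;pid&gt;/maps with a "(deleted)" marker.

```python
import glob
import re

# Matches a deleted OpenSSL shared object still mapped by a running process,
# e.g. "/usr/lib64/libssl.so.1.0.2k (deleted)".
_STALE_SSL = re.compile(r"/lib(ssl|crypto)\.so[^\n]*\(deleted\)")

def maps_show_stale_openssl(maps_text: str) -> bool:
    """True if a /proc/<pid>/maps dump still references a deleted libssl/libcrypto."""
    return _STALE_SSL.search(maps_text) is not None

def pids_needing_restart():
    """Yield PIDs of processes still using the pre-update OpenSSL (run as root for a full view)."""
    for path in glob.glob("/proc/[0-9]*/maps"):
        try:
            with open(path) as fh:
                if maps_show_stale_openssl(fh.read()):
                    yield int(path.split("/")[2])
        except OSError:
            continue  # process exited, or we lack permission to read its maps
```

On RHEL, the `needs-restarting` utility from the yum-utils package performs a similar check.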
- Bugs fixed (https://bugzilla.redhat.com/):
1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash() 1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate
- Package List:
Red Hat Enterprise Linux Client (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Client Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux ComputeNode Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
ppc64: openssl-1.0.2k-22.el7_9.ppc64.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-devel-1.0.2k-22.el7_9.ppc.rpm openssl-devel-1.0.2k-22.el7_9.ppc64.rpm openssl-libs-1.0.2k-22.el7_9.ppc.rpm openssl-libs-1.0.2k-22.el7_9.ppc64.rpm
ppc64le: openssl-1.0.2k-22.el7_9.ppc64le.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-devel-1.0.2k-22.el7_9.ppc64le.rpm openssl-libs-1.0.2k-22.el7_9.ppc64le.rpm
s390x: openssl-1.0.2k-22.el7_9.s390x.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-devel-1.0.2k-22.el7_9.s390.rpm openssl-devel-1.0.2k-22.el7_9.s390x.rpm openssl-libs-1.0.2k-22.el7_9.s390.rpm openssl-libs-1.0.2k-22.el7_9.s390x.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Server Optional (v. 7):
ppc64: openssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm openssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm openssl-perl-1.0.2k-22.el7_9.ppc64.rpm openssl-static-1.0.2k-22.el7_9.ppc.rpm openssl-static-1.0.2k-22.el7_9.ppc64.rpm
ppc64le: openssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm openssl-perl-1.0.2k-22.el7_9.ppc64le.rpm openssl-static-1.0.2k-22.el7_9.ppc64le.rpm
s390x: openssl-debuginfo-1.0.2k-22.el7_9.s390.rpm openssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm openssl-perl-1.0.2k-22.el7_9.s390x.rpm openssl-static-1.0.2k-22.el7_9.s390.rpm openssl-static-1.0.2k-22.el7_9.s390x.rpm
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation (v. 7):
Source: openssl-1.0.2k-22.el7_9.src.rpm
x86_64: openssl-1.0.2k-22.el7_9.x86_64.rpm openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-devel-1.0.2k-22.el7_9.i686.rpm openssl-devel-1.0.2k-22.el7_9.x86_64.rpm openssl-libs-1.0.2k-22.el7_9.i686.rpm openssl-libs-1.0.2k-22.el7_9.x86_64.rpm
Red Hat Enterprise Linux Workstation Optional (v. 7):
x86_64: openssl-debuginfo-1.0.2k-22.el7_9.i686.rpm openssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm openssl-perl-1.0.2k-22.el7_9.x86_64.rpm openssl-static-1.0.2k-22.el7_9.i686.rpm openssl-static-1.0.2k-22.el7_9.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2021-23840 https://access.redhat.com/security/cve/CVE-2021-23841 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2021 Red Hat, Inc.
-- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce
Description:
Red Hat Advanced Cluster Management for Kubernetes 2.1.12 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in.
Container updates:
- RHACM 2.1.12 images (BZ# 2007489)
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied.
Bugs fixed (https://bugzilla.redhat.com/):
2007489 - RHACM 2.1.12 images 2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets 2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request 2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser 2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure 2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams 2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack 2011020 - CVE-2021-41099 redis: Integer overflow issue with strings
- Bugs fixed (https://bugzilla.redhat.com/):
1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment
- JIRA issues fixed (https://issues.jboss.org/):
LOG-1168 - Disable hostname verification in syslog TLS settings
LOG-1235 - Using HTTPS without a secret does not translate into the correct 'scheme' value in Fluentd
LOG-1375 - ssl_ca_cert should be optional
LOG-1378 - CLO should support sasl_plaintext(Password over http)
LOG-1392 - In fluentd config, flush_interval can't be set with flush_mode=immediate
LOG-1494 - Syslog output is serializing json incorrectly
LOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server
LOG-1575 - Rejected by Elasticsearch and unexpected json-parsing
LOG-1735 - Regression introducing flush_at_shutdown
LOG-1774 - The collector logs should be excluded in fluent.conf
LOG-1776 - fluentd total_limit_size sets value beyond available space
LOG-1822 - OpenShift Alerting Rules Style-Guide Compliance
LOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled
LOG-1862 - Unsupported kafka parameters when enabled Kafka SASL
LOG-1903 - Fix the Display of ClusterLogging type in OLM
LOG-1911 - CLF API changes to Opt-in to multiline error detection
LOG-1918 - Alert FluentdNodeDown always firing
LOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding
- Red Hat OpenShift Container Storage is highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Container Storage provides a multicloud data management service with an S3 compatible API.
Bug Fix(es):
- Previously, when the namespace store target was deleted, no alert was sent to the namespace bucket because of an issue in calculating the namespace bucket health. With this update, the issue in calculating the namespace bucket health is fixed and alerts are triggered as expected. (BZ#1993873)
- Previously, the Multicloud Object Gateway (MCG) components performed slowly and there was a lot of pressure on the MCG components due to non-optimized database queries. With this update the non-optimized database queries are fixed, which reduces the compute resources and time taken for queries.
Bugs fixed (https://bugzilla.redhat.com/):
1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore's target bucket is deleted 2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input
- Bugs fixed (https://bugzilla.redhat.com/):
1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option 1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option
Affected products listed in the VARIoT record (VAR-202102-1488, CVE-2021-23841) include:
- OpenSSL 1.0.2 through 1.0.2x (fixed in 1.0.2y) and 1.1.1 through 1.1.1i (fixed in 1.1.1j)
- Debian Linux 10.0
- Tenable Nessus Network Monitor 5.11.0 through 5.13.0 and Tenable.sc 5.13.0 through 5.17.0
- Apple macOS 11.1 up to (excluding) 11.4, iOS and iPadOS before 14.6, Safari before 14.1.1
- NetApp SnapCenter, OnCommand Workflow Automation, and OnCommand Insight
- Siemens SINEC INS 1.0 and earlier
- Oracle MySQL Server (5.7 before 5.7.33; 8.0.15 before 8.0.23), MySQL Enterprise Monitor before 8.0.23, PeopleSoft Enterprise PeopleTools 8.57, 8.58 and 8.59, Business Intelligence 5.5.0.0.0, 5.9.0.0.0, 12.2.1.3.0 and 12.2.1.4.0, GraalVM 19.3.5, 20.3.1.2 and 21.0.0.2, Essbase 21.2, Enterprise Manager Ops Center 12.4.0.0, Enterprise Manager for Storage Management 13.4.0.0, ZFS Storage Appliance Kit 8.8, JD Edwards World Security A9.4, and Communications Cloud Native Core Policy 1.15.0
"cpe:2.3:a:oracle:graalvm:20.3.1.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:21.0.0.2:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:graalvm:19.3.5:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.0.23", "versionStartIncluding": "8.0.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_server:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.7.33", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:mysql_enterprise_monitor:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "8.0.23", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:essbase:21.2:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:business_intelligence:5.9.0.0.0:*:*:*:enterprise:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:1.15.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-23841" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "165286" }, { "db": 
"PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" } ], "trust": 0.9 }, "cve": "CVE-2021-23841", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:N/I:N/A:P", "version": "2.0" }, { "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "PARTIAL", "baseScore": 4.3, "confidentialityImpact": "NONE", "exploitabilityScore": 8.6, "id": "VHN-382524", "impactScore": 2.9, "integrityImpact": "NONE", "severity": "MEDIUM", "trust": 0.1, "vectorString": "AV:N/AC:M/AU:N/C:N/I:N/A:P", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": 
"NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:H", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-23841", "trust": 1.0, "value": "MEDIUM" }, { "author": "VULHUB", "id": "VHN-382524", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "The OpenSSL public API function X509_issuer_and_serial_hash() attempts to create a unique hash value based on the issuer and serial number data contained within an X509 certificate. However it fails to correctly handle any errors that may occur while parsing the issuer field (which might occur if the issuer field is maliciously constructed). This may subsequently result in a NULL pointer deref and a crash leading to a potential denial of service attack. The function X509_issuer_and_serial_hash() is never directly called by OpenSSL itself so applications are only vulnerable if they use this function directly and they use it on certificates that may have been obtained from untrusted sources. OpenSSL versions 1.1.1i and below are affected by this issue. Users of these versions should upgrade to OpenSSL 1.1.1j. OpenSSL versions 1.0.2x and below are affected by this issue. However OpenSSL 1.0.2 is out of support and no longer receiving public updates. Premium support customers of OpenSSL 1.0.2 should upgrade to 1.0.2y. Other users should upgrade to 1.1.1j. Fixed in OpenSSL 1.1.1j (Affected 1.1.1-1.1.1i). 
Fixed in OpenSSL 1.0.2y (Affected 1.0.2-1.0.2x). Please keep an eye on CNNVD or manufacturer announcements. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\nAPPLE-SA-2021-05-25-2 macOS Big Sur 11.4\n\nmacOS Big Sur 11.4 addresses the following issues. \nInformation about the security content is also available at\nhttps://support.apple.com/HT212529. \n\nAMD\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30678: Yu Wang of Didi Research America\n\nAMD\nAvailable for: macOS Big Sur\nImpact: A local user may be able to cause unexpected system\ntermination or read kernel memory\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30676: shrek_wzw\n\nApp Store\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: A path handling issue was addressed with improved\nvalidation. \nCVE-2021-30688: Thijs Alkemade of Computest Research Division\n\nAppleScript\nAvailable for: macOS Big Sur\nImpact: A malicious application may bypass Gatekeeper checks\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30669: Yair Hoffmann\n\nAudio\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted audio file may lead to\narbitrary code execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30707: hjy79425575 working with Trend Micro Zero Day\nInitiative\n\nAudio\nAvailable for: macOS Big Sur\nImpact: Parsing a maliciously crafted audio file may lead to\ndisclosure of user information\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30685: Mickey Jin (@patch1t) of Trend Micro\n\nCore Services\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to gain root privileges\nDescription: A validation issue existed in the handling of symlinks. \nThis issue was addressed with improved validation of symlinks. \nCVE-2021-30681: Zhongcheng Li (CK01)\n\nCoreAudio\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted audio file may disclose\nrestricted memory\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30686: Mickey Jin of Trend Micro\n\nCrash Reporter\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to modify protected parts\nof the file system\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30727: Cees Elzinga\n\nCVMS\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: This issue was addressed with improved checks. \nCVE-2021-30724: Mickey Jin (@patch1t) of Trend Micro\n\nDock\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to access a user\u0027s call\nhistory\nDescription: An access issue was addressed with improved access\nrestrictions. \nCVE-2021-30673: Josh Parnham (@joshparnham)\n\nGraphics Drivers\nAvailable for: macOS Big Sur\nImpact: A remote attacker may cause an unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30684: Liu Long of Ant Security Light-Year Lab\n\nGraphics Drivers\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30735: Jack Dates of RET2 Systems, Inc. 
(@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A local user may be able to leak sensitive user information\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30697: Gabe Kirkpatrick (@gabe_k)\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A malicious application may cause a denial of service or\npotentially disclose memory contents\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30710: Gabe Kirkpatrick (@gabe_k)\n\nHeimdal\nAvailable for: macOS Big Sur\nImpact: A malicious application could execute arbitrary code leading\nto compromise of user information\nDescription: A use after free issue was addressed with improved\nmemory management. \nCVE-2021-30683: Gabe Kirkpatrick (@gabe_k)\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30687: Hou JingYi (@hjy79425575) of Qihoo 360\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to disclosure\nof user information\nDescription: This issue was addressed with improved checks. \nCVE-2021-30700: Ye Zhang(@co0py_Cat) of Baidu Security\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: This issue was addressed with improved checks. \nCVE-2021-30701: Mickey Jin (@patch1t) of Trend Micro and Ye Zhang of\nBaidu Security\n\nImageIO\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted ASTC file may disclose\nmemory contents\nDescription: This issue was addressed with improved checks. 
\nCVE-2021-30705: Ye Zhang of Baidu Security\n\nIntel Graphics Driver\nAvailable for: macOS Big Sur\nImpact: A local user may be able to cause unexpected system\ntermination or read kernel memory\nDescription: An out-of-bounds read issue was addressed by removing\nthe vulnerable code. \nCVE-2021-30719: an anonymous researcher working with Trend Micro Zero\nDay Initiative\n\nIntel Graphics Driver\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: An out-of-bounds write issue was addressed with improved\nbounds checking. \nCVE-2021-30728: Liu Long of Ant Security Light-Year Lab\nCVE-2021-30726: Yinyi Wu(@3ndy1) of Qihoo 360 Vulcan Team\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to execute arbitrary code\nwith kernel privileges\nDescription: A logic issue was addressed with improved validation. \nCVE-2021-30740: Linus Henze (pinauten.de)\n\nKernel\nAvailable for: macOS Big Sur\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30704: an anonymous researcher\n\nKernel\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted message may lead to a denial\nof service\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30715: The UK\u0027s National Cyber Security Centre (NCSC)\n\nKernel\nAvailable for: macOS Big Sur\nImpact: An application may be able to execute arbitrary code with\nkernel privileges\nDescription: A buffer overflow was addressed with improved size\nvalidation. \nCVE-2021-30736: Ian Beer of Google Project Zero\n\nKernel\nAvailable for: macOS Big Sur\nImpact: A local attacker may be able to elevate their privileges\nDescription: A memory corruption issue was addressed with improved\nvalidation. 
\nCVE-2021-30739: Zuozhi Fan (@pattern_F_) of Ant Group Tianqiong\nSecurity Lab\n\nKext Management\nAvailable for: macOS Big Sur\nImpact: A local user may be able to load unsigned kernel extensions\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30680: Csaba Fitzl (@theevilbit) of Offensive Security\n\nLaunchServices\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to break out of its\nsandbox\nDescription: This issue was addressed with improved environment\nsanitization. \nCVE-2021-30677: Ron Waisberg (@epsilan)\n\nLogin Window\nAvailable for: macOS Big Sur\nImpact: A person with physical access to a Mac may be able to bypass\nLogin Window\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30702: Jewel Lambert of Original Spin, LLC. \n\nMail\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nmisrepresent application state\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30696: Fabian Ising and Damian Poddebniak of M\u00fcnster\nUniversity of Applied Sciences\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An information disclosure issue was addressed with\nimproved state management. \nCVE-2021-30723: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30691: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30692: Mickey Jin (@patch1t) of Trend Micro\nCVE-2021-30694: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: A memory corruption issue was addressed with improved\nstate management. 
\nCVE-2021-30725: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30746: Mickey Jin (@patch1t) of Trend Micro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted image may lead to arbitrary\ncode execution\nDescription: A validation issue was addressed with improved logic. \nCVE-2021-30693: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: An out-of-bounds read was addressed with improved bounds\nchecking. \nCVE-2021-30695: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may lead to\nunexpected application termination or arbitrary code execution\nDescription: An out-of-bounds read was addressed with improved input\nvalidation. \nCVE-2021-30708: Mickey Jin (@patch1t) \u0026 Junzhi Lu (@pwn0rz) of Trend\nMicro\n\nModel I/O\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted USD file may disclose memory\ncontents\nDescription: This issue was addressed with improved checks. \nCVE-2021-30709: Mickey Jin (@patch1t) of Trend Micro\n\nNSOpenPanel\nAvailable for: macOS Big Sur\nImpact: An application may be able to gain elevated privileges\nDescription: This issue was addressed by removing the vulnerable\ncode. \nCVE-2021-30679: Gabe Kirkpatrick (@gabe_k)\n\nOpenLDAP\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause a denial of service\nDescription: This issue was addressed with improved checks. 
\nCVE-2020-36226\nCVE-2020-36227\nCVE-2020-36223\nCVE-2020-36224\nCVE-2020-36225\nCVE-2020-36221\nCVE-2020-36228\nCVE-2020-36222\nCVE-2020-36230\nCVE-2020-36229\n\nPackageKit\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to overwrite arbitrary\nfiles\nDescription: An issue with path validation logic for hardlinks was\naddressed with improved path sanitization. \nCVE-2021-30738: Qingyang Chen of Topsec Alpha Team and Csaba Fitzl\n(@theevilbit) of Offensive Security\n\nSecurity\nAvailable for: macOS Big Sur\nImpact: Processing a maliciously crafted certificate may lead to\narbitrary code execution\nDescription: A memory corruption issue in the ASN.1 decoder was\naddressed by removing the vulnerable code. \nCVE-2021-30737: xerub\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nperform denial of service\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30716: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nexecute arbitrary code\nDescription: A memory corruption issue was addressed with improved\nstate management. \nCVE-2021-30717: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nleak sensitive user information\nDescription: A path handling issue was addressed with improved\nvalidation. \nCVE-2021-30721: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: An attacker in a privileged network position may be able to\nleak sensitive user information\nDescription: An information disclosure issue was addressed with\nimproved state management. 
\nCVE-2021-30722: Aleksandar Nikolic of Cisco Talos\n\nsmbx\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause unexpected application\ntermination or arbitrary code execution\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30712: Aleksandar Nikolic of Cisco Talos\n\nSoftware Update\nAvailable for: macOS Big Sur\nImpact: A person with physical access to a Mac may be able to bypass\nLogin Window during a software update\nDescription: This issue was addressed with improved checks. \nCVE-2021-30668: Syrus Kimiagar and Danilo Paffi Monteiro\n\nSoftwareUpdate\nAvailable for: macOS Big Sur\nImpact: A non-privileged user may be able to modify restricted\nsettings\nDescription: This issue was addressed with improved checks. \nCVE-2021-30718: SiQian Wei of ByteDance Security\n\nTCC\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to send unauthorized\nApple events to Finder\nDescription: A validation issue was addressed with improved logic. \nCVE-2021-30671: Ryan Bell (@iRyanBell)\n\nTCC\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to bypass Privacy\npreferences. Apple is aware of a report that this issue may have been\nactively exploited. \nDescription: A permissions issue was addressed with improved\nvalidation. \nCVE-2021-30713: an anonymous researcher\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A cross-origin issue with iframe elements was addressed\nwith improved tracking of security origins. \nCVE-2021-30744: Dan Hite of jsontop\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: A use after free issue was addressed with improved\nmemory management. 
\nCVE-2021-21779: Marcin Towalski of Cisco Talos\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: A malicious application may be able to leak sensitive user\ninformation\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30682: an anonymous researcher and 1lastBr3ath\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\nuniversal cross site scripting\nDescription: A logic issue was addressed with improved state\nmanagement. \nCVE-2021-30689: an anonymous researcher\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: Processing maliciously crafted web content may lead to\narbitrary code execution\nDescription: Multiple memory corruption issues were addressed with\nimproved memory handling. \nCVE-2021-30749: an anonymous researcher and mipu94 of SEFCOM lab,\nASU. working with Trend Micro Zero Day Initiative\nCVE-2021-30734: Jack Dates of RET2 Systems, Inc. (@ret2systems)\nworking with Trend Micro Zero Day Initiative\n\nWebKit\nAvailable for: macOS Big Sur\nImpact: A malicious website may be able to access restricted ports on\narbitrary servers\nDescription: A logic issue was addressed with improved restrictions. \nCVE-2021-30720: David Sch\u00fctz (@xdavidhu)\n\nWebRTC\nAvailable for: macOS Big Sur\nImpact: A remote attacker may be able to cause a denial of service\nDescription: A null pointer dereference was addressed with improved\ninput validation. \nCVE-2021-23841: Tavis Ormandy of Google\nCVE-2021-30698: Tavis Ormandy of Google\n\nAdditional recognition\n\nApp Store\nWe would like to acknowledge Thijs Alkemade of Computest Research\nDivision for their assistance. \n\nCoreCapture\nWe would like to acknowledge Zuozhi Fan (@pattern_F_) of Ant-\nfinancial TianQiong Security Lab for their assistance. \n\nImageIO\nWe would like to acknowledge Jzhu working with Trend Micro Zero Day\nInitiative and an anonymous researcher for their assistance. 
\n\nMail Drafts\nWe would like to acknowledge Lauritz Holtmann (@_lauritz_) for their\nassistance. \n\nWebKit\nWe would like to acknowledge Chris Salls (@salls) of Makai Security\nfor their assistance. \n\nInstallation note:\n\nThis update may be obtained from the Mac App Store or\nApple\u0027s Software Downloads web site:\nhttps://support.apple.com/downloads/\n\nInformation will also be posted to the Apple Security Updates\nweb site: https://support.apple.com/kb/HT201222\n\nThis message is signed with Apple\u0027s Product Security PGP key,\nand details are available at:\nhttps://www.apple.com/support/security/pgp/\n\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCAAdFiEEbURczHs1TP07VIfuZcsbuWJ6jjAFAmCtU9AACgkQZcsbuWJ6\njjDC5g/+P0Hya9smOX6XVhxtnwe+vh2d5zOrKLBymdkvDPGw1UQoGOq08+7eu02Q\nvsManS/aP1UKNcMnbALHNFbFXv61ZjWi+71qgGGAQAe3EtYTJchBiIIyOBNIHoOJ\n8X9sOeiyFzOOKw+GyVsBMNRL9Oh678USC4qgyyO5u2+Oexehu+6N9YNdAzwZgy6o\nmuP+NlZ08s80ahRfq/6q8uKj7+Is0k5OEdxpWTnJOoXUDzZPj4Vo7H0HL6zjuqg3\nCurJQABF3kDBWgZCvroMU6/HpbilGPE+JUFV7HPfaMe6iE3FsfrOq101w+/ovuNM\nhJ3yk/QENoh5BYdHKJo7zPVZBteGX20EVPdWfTsnz6a/hk568A+ICiupFIqwEuQv\nesIBWzgab9YUb2fAaZ071Z+lSn0Rj7tm3V/rhdwq19tYD3Q7BqEJ+YxYCH2zvyIB\nmP4/NoMpsDiTqFradR8Skac5uwINpZzAHjFyWLj0QVWVMxyQB8EGshR16YPkMryJ\nrjGyNIqZPcZ/Z6KJqpvNJrfI+b0oeqFMBUwpwK/7aQFPP/MvsM+UVSySipRiqwoa\nWAHMuY4SQwcseok7N6Rf+zAEYm9Nc+YglYpTW2taw6g0vWNIuCbyzPdC/Srrjw98\nod2jLahPwyoBg6WBvXoZ6H4YOWFAywf225nYk3l5ATsG6rNbhYk=\n=Avma\n-----END PGP SIGNATURE-----\n\n\n. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1944888 - CVE-2021-21409 netty: Request smuggling via content-length header\n2004133 - CVE-2021-37136 netty-codec: Bzip2Decoder doesn\u0027t allow setting size restrictions for decompressed data\n2004135 - CVE-2021-37137 netty-codec: SnappyFrameDecoder doesn\u0027t restrict chunk length and may buffer skippable chunks in an unnecessary way\n2030932 - CVE-2021-44228 log4j-core: Remote code execution in Log4j 2.x when logs contain an attacker-controlled string value\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1971 - Applying cluster state is causing elasticsearch to hit an issue and become unusable\n\n6. 8) - aarch64, ppc64le, s390x, x86_64\n\n3. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.5 Release Notes linked from the References section. \n1965362 - In renegotiated handshake openssl sends extensions which client didn\u0027t advertise in second ClientHello [rhel-8]\n\n6. 8) - noarch\n\n3. Description:\n\nEDK (Embedded Development Kit) is a project to enable UEFI support for\nVirtual Machines. This package contains a sample 64-bit UEFI firmware for\nQEMU and KVM. \n\nThe following packages have been upgraded to a later upstream version: edk2\n(20210527gite1999b264f1f). -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Moderate: openssl security update\nAdvisory ID: RHSA-2021:3798-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2021:3798\nIssue date: 2021-10-12\nCVE Names: CVE-2021-23840 CVE-2021-23841 \n=====================================================================\n\n1. Summary:\n\nAn update for openssl is now available for Red Hat Enterprise Linux 7. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. 
A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Enterprise Linux Client (v. 7) - x86_64\nRed Hat Enterprise Linux Client Optional (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode (v. 7) - x86_64\nRed Hat Enterprise Linux ComputeNode Optional (v. 7) - x86_64\nRed Hat Enterprise Linux Server (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Server Optional (v. 7) - ppc64, ppc64le, s390x, x86_64\nRed Hat Enterprise Linux Workstation (v. 7) - x86_64\nRed Hat Enterprise Linux Workstation Optional (v. 7) - x86_64\n\n3. Description:\n\nOpenSSL is a toolkit that implements the Secure Sockets Layer (SSL) and\nTransport Layer Security (TLS) protocols, as well as a full-strength\ngeneral-purpose cryptography library. \n\nSecurity Fix(es):\n\n* openssl: integer overflow in CipherUpdate (CVE-2021-23840)\n\n* openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n(CVE-2021-23841)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nFor the update to take effect, all services linked to the OpenSSL library\nmust be restarted, or the system rebooted. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1930310 - CVE-2021-23841 openssl: NULL pointer dereference in X509_issuer_and_serial_hash()\n1930324 - CVE-2021-23840 openssl: integer overflow in CipherUpdate\n\n6. Package List:\n\nRed Hat Enterprise Linux Client (v. 
7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Client Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux ComputeNode Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server (v. 
7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nppc64:\nopenssl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-devel-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-libs-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390.rpm\nopenssl-devel-1.0.2k-22.el7_9.s390x.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390.rpm\nopenssl-libs-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Server Optional (v. 
7):\n\nppc64:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64.rpm\n\nppc64le:\nopenssl-debuginfo-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-perl-1.0.2k-22.el7_9.ppc64le.rpm\nopenssl-static-1.0.2k-22.el7_9.ppc64le.rpm\n\ns390x:\nopenssl-debuginfo-1.0.2k-22.el7_9.s390.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.s390x.rpm\nopenssl-perl-1.0.2k-22.el7_9.s390x.rpm\nopenssl-static-1.0.2k-22.el7_9.s390.rpm\nopenssl-static-1.0.2k-22.el7_9.s390x.rpm\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nopenssl-1.0.2k-22.el7_9.src.rpm\n\nx86_64:\nopenssl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-devel-1.0.2k-22.el7_9.i686.rpm\nopenssl-devel-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-libs-1.0.2k-22.el7_9.i686.rpm\nopenssl-libs-1.0.2k-22.el7_9.x86_64.rpm\n\nRed Hat Enterprise Linux Workstation Optional (v. 7):\n\nx86_64:\nopenssl-debuginfo-1.0.2k-22.el7_9.i686.rpm\nopenssl-debuginfo-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-perl-1.0.2k-22.el7_9.x86_64.rpm\nopenssl-static-1.0.2k-22.el7_9.i686.rpm\nopenssl-static-1.0.2k-22.el7_9.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23840\nhttps://access.redhat.com/security/cve/CVE-2021-23841\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2021 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYWWqjtzjgjWX9erEAQj4lg/+IFxqmMQqLSvyz8cKUAPgss/+/wFMpRgh\nZZxYBQQ0cBFfWFlROVLaRdeiGcZYkyJCRDqy2Yb8YO1A4PnSOc+htLFYmSmU2kcm\nQLHinOzGEZo/44vN7Qsl4WhJkJIdlysCwKpkkOCUprMEnhlWMvja2eSSG9JLH16d\nRqGe4AsJQLKSKLgmhejCOqxb9am+t9zBW0zaZHP4UR52Ju1rG5rLjBJ85Gcrmp2B\nvp/GVEQ/Asid4MZA2WTx+s6wj5Dt7JOdLWrUbcYAC0I8oPWbAoZJTfPkM7S6Xv+U\n68iruVFTh74IkCbQ+SNLoYjiDAVJqtAVRVBha7Fd3/gWR6aJLLaqluLRGvd0mwXY\npohCS0ynuMQ9wtYOJ3ezSVcBN+/d9Hs/3s8RWQTzrNG6jtBe57H9/tNkeSVFSVvu\nPMKXsUoOrIUE2HCflJytDB9wkQmsWxiZoH/xVlrtD0D11egZ4EWjJL6x+xtCTAkT\nu67CAwsCKxxCeNmz42uBtXSwFXoUapJnsviGzAx247T2pyuXlYMYHlsOy7CtBvIk\njEEosCMM72UyXO4XsYTXc0jM3ze6iQTcF9irwhy+X+rTB4IXBubdUEoT0jnKlwfI\nBQvoPEBlcG+f0VU8BL+FCOosvM0ZqC7KGGOwJLoG1Vqz8rbtmhpcmNAOvzUiHdm3\nT4OjSl1NzQQ=\n=Taj2\n-----END PGP SIGNATURE-----\n\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.1.12 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. \n\nContainer updates:\n\n* RHACM 2.1.12 images (BZ# 2007489)\n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2007489 - RHACM 2.1.12 images\n2010991 - CVE-2021-32687 redis: Integer overflow issue with intsets\n2011000 - CVE-2021-32675 redis: Denial of service via Redis Standard Protocol (RESP) request\n2011001 - CVE-2021-32672 redis: Out of bounds read in lua debugger protocol parser\n2011004 - CVE-2021-32628 redis: Integer overflow bug in the ziplist data structure\n2011010 - CVE-2021-32627 redis: Integer overflow issue with Streams\n2011017 - CVE-2021-32626 redis: Lua scripts can overflow the heap-based Lua stack\n2011020 - CVE-2021-41099 redis: Integer overflow issue with strings\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1963232 - CVE-2021-33194 golang: x/net/html: infinite loop in ParseFragment\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nLOG-1168 - Disable hostname verification in syslog TLS settings\nLOG-1235 - Using HTTPS without a secret does not translate into the correct \u0027scheme\u0027 value in Fluentd\nLOG-1375 - ssl_ca_cert should be optional\nLOG-1378 - CLO should support sasl_plaintext(Password over http)\nLOG-1392 - In fluentd config, flush_interval can\u0027t be set with flush_mode=immediate\nLOG-1494 - Syslog output is serializing json incorrectly\nLOG-1555 - Fluentd logs emit transaction failed: error_class=NoMethodError while forwarding to external syslog server\nLOG-1575 - Rejected by Elasticsearch and unexpected json-parsing\nLOG-1735 - Regression introducing flush_at_shutdown \nLOG-1774 - The collector logs should be excluded in fluent.conf\nLOG-1776 - fluentd total_limit_size sets value beyond available space\nLOG-1822 - OpenShift Alerting Rules Style-Guide Compliance\nLOG-1859 - CLO Should not error and exit early on missing ca-bundle when cluster wide proxy is not enabled\nLOG-1862 - Unsupported kafka parameters when enabled Kafka SASL\nLOG-1903 - Fix the Display of ClusterLogging type in OLM\nLOG-1911 - CLF API changes to Opt-in to multiline error detection\nLOG-1918 - Alert 
`FluentdNodeDown` always firing \nLOG-1939 - Opt-in multiline detection breaks cloudwatch forwarding\n\n6. \nRed Hat OpenShift Container Storage is highly scalable, production-grade\npersistent storage for stateful applications running in the Red Hat\nOpenShift Container Platform. In addition to persistent storage, Red Hat\nOpenShift Container Storage provides a multicloud data management service\nwith an S3 compatible API. \n\nBug Fix(es):\n\n* Previously, when the namespace store target was deleted, no alert was\nsent to the namespace bucket because of an issue in calculating the\nnamespace bucket health. With this update, the issue in calculating the\nnamespace bucket health is fixed and alerts are triggered as expected. \n(BZ#1993873)\n\n* Previously, the Multicloud Object Gateway (MCG) components performed\nslowly and there was a lot of pressure on the MCG components due to\nnon-optimized database queries. With this update the non-optimized\ndatabase queries are fixed which reduces the compute resources and time\ntaken for queries. Bugs fixed (https://bugzilla.redhat.com/):\n\n1993873 - [4.8.z clone] Alert NooBaaNamespaceBucketErrorState is not triggered when namespacestore\u0027s target bucket is deleted\n2006958 - CVE-2020-26301 nodejs-ssh2: Command injection by calling vulnerable method with untrusted input\n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1948761 - CVE-2021-23369 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with strict:true option\n1956688 - CVE-2021-23383 nodejs-handlebars: Remote code execution when compiling untrusted compile templates with compat:true option\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-23841" }, { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" } ], "trust": 1.89 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-382524", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-23841", "trust": 2.1 }, { "db": "TENABLE", "id": "TNS-2021-03", "trust": 1.1 }, { "db": "TENABLE", "id": "TNS-2021-09", "trust": 1.1 }, { "db": "PULSESECURE", "id": "SA44846", "trust": 1.1 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "PACKETSTORM", "id": "165096", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164583", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164889", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "165002", "trust": 0.2 }, { "db": 
"PACKETSTORM", "id": "162826", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "164890", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "162151", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161525", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165099", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162823", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164928", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162824", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164927", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "161459", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165129", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "162041", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-382524", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "165286", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164562", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164489", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "164967", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "id": "VAR-202102-1488", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-382524" } ], "trust": 0.30766129 }, "last_update_date": "2024-07-23T20:39:26.069000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { 
"problemtype": "CWE-476", "trust": 1.1 }, { "problemtype": "CWE-190", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://kb.pulsesecure.net/articles/pulse_security_advisories/sa44846" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210219-0009/" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20210513-0002/" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212528" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212529" }, { "trust": 1.1, "url": "https://support.apple.com/kb/ht212534" }, { "trust": 1.1, "url": "https://www.openssl.org/news/secadv/20210216.txt" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-03" }, { "trust": 1.1, "url": "https://www.tenable.com/security/tns-2021-09" }, { "trust": 1.1, "url": "https://www.debian.org/security/2021/dsa-4855" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/67" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/70" }, { "trust": 1.1, "url": "http://seclists.org/fulldisclosure/2021/may/68" }, { "trust": 1.1, "url": "https://security.gentoo.org/glsa/202103-03" }, { "trust": 1.1, "url": "https://www.oracle.com//security-alerts/cpujul2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2021.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpuoct2021.html" }, { "trust": 1.0, "url": 
"https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=122a19ab48091c657f7cb1fb3af9fc07bd557bbf" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807" }, { "trust": 1.0, "url": "https://security.netapp.com/advisory/ntap-20240621-0006/" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-23841" }, { "trust": 0.9, "url": "https://access.redhat.com/security/cve/cve-2021-23840" }, { "trust": 0.9, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.9, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.9, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23841" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23840" }, { "trust": 0.6, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3200" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-5827" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-27645" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33574" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-13435" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-5827" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13751" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-19603" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-35942" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-17594" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-24370" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3572" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36086" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3778" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13750" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-13751" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22898" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-12762" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-16135" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36084" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3800" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17594" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36087" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3445" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13435" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-19603" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22925" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-18218" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20232" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20266" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-20838" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-22876" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-20231" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2020-14155" }, { "trust": 0.4, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2019-20838" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-36085" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-33560" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-17595" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14155" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-28153" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-13750" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3426" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2019-18218" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3580" }, { "trust": 0.4, "url": "https://access.redhat.com/security/cve/cve-2021-3796" }, { "trust": 0.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-17595" }, { "trust": 0.4, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-20673" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-42574" }, { "trust": 0.3, "url": "https://issues.jboss.org/):" }, { "trust": 0.3, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35522" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35524" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25013" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-14145" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2018-25014" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14145" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35521" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-17541" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36331" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-31535" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36330" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-36332" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25010" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-17541" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25014" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3481" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25009" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25010" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-35523" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.5_release_notes/" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22922" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32626" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32687" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22543" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37750" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32626" }, { "trust": 0.2, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-41099" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22923" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3656" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3653" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3656" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22543" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22924" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37750" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22922" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-4658" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22924" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32675" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2016-4658" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41099" }, { "trust": 0.2, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_mana" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3653" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32627" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32687" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37576" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32628" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32672" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-36222" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32627" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-32672" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-22923" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-32628" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-37576" }, { "trust": 0.2, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22876" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-27645" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20231" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22925" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22898" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20266" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20232" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=122a19ab48091c657f7cb1fb3af9fc07bd557bbf" }, { "trust": 0.1, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=8252ee4d90f3f2004d3d0aeeed003ad49c9a7807" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36228" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21779" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30684" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30671" }, { "trust": 0.1, "url": "https://support.apple.com/downloads/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30682" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30669" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30685" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36221" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-36225" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30676" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36226" }, { "trust": 0.1, "url": "https://www.apple.com/support/security/pgp/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36224" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36229" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36223" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30679" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30673" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30678" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30677" }, { "trust": 0.1, "url": "https://support.apple.com/kb/ht201222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36230" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30681" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30680" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36227" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30683" }, { "trust": 0.1, "url": "https://support.apple.com/ht212529." 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-30668" }, { "trust": 0.1, "url": "https://access.redhat.com/security/vulnerabilities/rhsb-2021-009" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43527" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35522" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44228" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35523" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:5128" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21409" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-36330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35521" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4424" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4198" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-25648" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21670" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25741" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23017" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-25648" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-21671" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3925" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-32690" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-21671" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-32690" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23017" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25741" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3798" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:3949" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23133" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3573" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26141" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-27777" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26147" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36386" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24587" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33033" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20197" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3487" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-0427" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36312" }, { 
"trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31829" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26145" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3564" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-10001" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35448" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3489" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28971" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26146" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26139" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3679" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24588" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-36158" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24504" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3348" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24503" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20284" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-29646" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0427" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2019-14615" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24502" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-0129" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3635" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20194" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3659" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33200" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-29660" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26140" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3600" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-24586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20239" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-24502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3732" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-28950" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4627" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31916" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4845" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-20095" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28493" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-42771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-26301" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-26301" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2021-28957" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-8037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20095" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28493" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23369" }, { "trust": 0.1, "url": "https://access.redhat.com/security/updates/classification/#low" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/logging/cluster-logging-upgrading.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23369" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28153" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23383" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2021:4032" } ], "sources": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-382524" }, { "db": "PACKETSTORM", "id": "162826" }, { "db": "PACKETSTORM", "id": "165286" }, { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": 
"164583" }, { "db": "PACKETSTORM", "id": "164967" }, { "db": "PACKETSTORM", "id": "165096" }, { "db": "PACKETSTORM", "id": "165002" }, { "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2021-02-16T00:00:00", "db": "VULHUB", "id": "VHN-382524" }, { "date": "2021-05-26T17:50:31", "db": "PACKETSTORM", "id": "162826" }, { "date": "2021-12-15T15:20:33", "db": "PACKETSTORM", "id": "165286" }, { "date": "2021-11-10T17:13:10", "db": "PACKETSTORM", "id": "164889" }, { "date": "2021-11-10T17:13:18", "db": "PACKETSTORM", "id": "164890" }, { "date": "2021-10-20T15:45:47", "db": "PACKETSTORM", "id": "164562" }, { "date": "2021-10-13T14:47:32", "db": "PACKETSTORM", "id": "164489" }, { "date": "2021-10-21T15:31:47", "db": "PACKETSTORM", "id": "164583" }, { "date": "2021-11-15T17:25:56", "db": "PACKETSTORM", "id": "164967" }, { "date": "2021-11-29T18:12:32", "db": "PACKETSTORM", "id": "165096" }, { "date": "2021-11-17T15:25:40", "db": "PACKETSTORM", "id": "165002" }, { "date": "2021-02-16T17:15:13.377000", "db": "NVD", "id": "CVE-2021-23841" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-01-09T00:00:00", "db": "VULHUB", "id": "VHN-382524" }, { "date": "2024-06-21T19:15:17.377000", "db": "NVD", "id": "CVE-2021-23841" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Apple Security Advisory 2021-05-25-2", "sources": [ { "db": "PACKETSTORM", "id": "162826" } ], "trust": 0.1 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": 
"overflow", "sources": [ { "db": "PACKETSTORM", "id": "164889" }, { "db": "PACKETSTORM", "id": "164890" }, { "db": "PACKETSTORM", "id": "164562" }, { "db": "PACKETSTORM", "id": "164489" }, { "db": "PACKETSTORM", "id": "164583" } ], "trust": 0.5 } }
var-202207-0378
Vulnerability from variot
A cryptographic vulnerability exists on Node.js on linux in versions of 18.x prior to 18.40.0 which allowed a default path for openssl.cnf that might be accessible under some circumstances to a non-admin user instead of /etc/ssl as was the case in versions prior to the upgrade to OpenSSL 3. The Node.js Foundation's Node.js and products from multiple other vendors are vulnerable to uncontrolled search path elements. Information may be tampered with. Node.js July 7th 2022 Security Releases: Attempt to read openssl.cnf from /home/iojs/build/ upon startup. When Node.js starts on linux based systems, it attempts to read /home/iojs/build/ws/out/Release/obj.target/deps/openssl/openssl.cnf, which ordinarily doesn't exist. On some shared systems an attacker may be able to create this file and therefore affect the default OpenSSL configuration for other users. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29
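The openssl.cnf issue described above is classified as an uncontrolled search path element (CWE-427): a process trusts a default path that an unprivileged user on a shared system may be able to populate. As an illustrative sketch of the defensive pattern (not the actual Node.js fix; the function name is hypothetical), a loader can refuse configuration files that are writable by anyone other than their owner:

```python
import os
import stat

def load_trusted_config(path):
    """Read a config file only if it is not writable by group or other
    users, guarding against a planted file on a shared system."""
    st = os.stat(path)  # raises FileNotFoundError if the path is absent
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        raise PermissionError("refusing config writable by others: %s" % path)
    with open(path, "rb") as f:
        return f.read()
```

With such a check, a planted group- or world-writable openssl.cnf would be rejected instead of silently changing the process's OpenSSL defaults.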
https://security.gentoo.org/
Severity: Low
Title: Node.js: Multiple Vulnerabilities
Date: May 08, 2024
Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614
ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774    https://nvd.nist.gov/vuln/detail/CVE-2020-7774
[ 2 ] CVE-2021-3672    https://nvd.nist.gov/vuln/detail/CVE-2021-3672
[ 3 ] CVE-2021-22883   https://nvd.nist.gov/vuln/detail/CVE-2021-22883
[ 4 ] CVE-2021-22884   https://nvd.nist.gov/vuln/detail/CVE-2021-22884
[ 5 ] CVE-2021-22918   https://nvd.nist.gov/vuln/detail/CVE-2021-22918
[ 6 ] CVE-2021-22930   https://nvd.nist.gov/vuln/detail/CVE-2021-22930
[ 7 ] CVE-2021-22931   https://nvd.nist.gov/vuln/detail/CVE-2021-22931
[ 8 ] CVE-2021-22939   https://nvd.nist.gov/vuln/detail/CVE-2021-22939
[ 9 ] CVE-2021-22940   https://nvd.nist.gov/vuln/detail/CVE-2021-22940
[ 10 ] CVE-2021-22959  https://nvd.nist.gov/vuln/detail/CVE-2021-22959
[ 11 ] CVE-2021-22960  https://nvd.nist.gov/vuln/detail/CVE-2021-22960
[ 12 ] CVE-2021-37701  https://nvd.nist.gov/vuln/detail/CVE-2021-37701
[ 13 ] CVE-2021-37712  https://nvd.nist.gov/vuln/detail/CVE-2021-37712
[ 14 ] CVE-2021-39134  https://nvd.nist.gov/vuln/detail/CVE-2021-39134
[ 15 ] CVE-2021-39135  https://nvd.nist.gov/vuln/detail/CVE-2021-39135
[ 16 ] CVE-2021-44531  https://nvd.nist.gov/vuln/detail/CVE-2021-44531
[ 17 ] CVE-2021-44532  https://nvd.nist.gov/vuln/detail/CVE-2021-44532
[ 18 ] CVE-2021-44533  https://nvd.nist.gov/vuln/detail/CVE-2021-44533
[ 19 ] CVE-2022-0778   https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 20 ] CVE-2022-3602   https://nvd.nist.gov/vuln/detail/CVE-2022-3602
[ 21 ] CVE-2022-3786   https://nvd.nist.gov/vuln/detail/CVE-2022-3786
[ 22 ] CVE-2022-21824  https://nvd.nist.gov/vuln/detail/CVE-2022-21824
[ 23 ] CVE-2022-32212  https://nvd.nist.gov/vuln/detail/CVE-2022-32212
[ 24 ] CVE-2022-32213  https://nvd.nist.gov/vuln/detail/CVE-2022-32213
[ 25 ] CVE-2022-32214  https://nvd.nist.gov/vuln/detail/CVE-2022-32214
[ 26 ] CVE-2022-32215  https://nvd.nist.gov/vuln/detail/CVE-2022-32215
[ 27 ] CVE-2022-32222  https://nvd.nist.gov/vuln/detail/CVE-2022-32222
[ 28 ] CVE-2022-35255  https://nvd.nist.gov/vuln/detail/CVE-2022-35255
[ 29 ] CVE-2022-35256  https://nvd.nist.gov/vuln/detail/CVE-2022-35256
[ 30 ] CVE-2022-35948  https://nvd.nist.gov/vuln/detail/CVE-2022-35948
[ 31 ] CVE-2022-35949  https://nvd.nist.gov/vuln/detail/CVE-2022-35949
[ 32 ] CVE-2022-43548  https://nvd.nist.gov/vuln/detail/CVE-2022-43548
[ 33 ] CVE-2023-30581  https://nvd.nist.gov/vuln/detail/CVE-2023-30581
[ 34 ] CVE-2023-30582  https://nvd.nist.gov/vuln/detail/CVE-2023-30582
[ 35 ] CVE-2023-30583  https://nvd.nist.gov/vuln/detail/CVE-2023-30583
[ 36 ] CVE-2023-30584  https://nvd.nist.gov/vuln/detail/CVE-2023-30584
[ 37 ] CVE-2023-30586  https://nvd.nist.gov/vuln/detail/CVE-2023-30586
[ 38 ] CVE-2023-30587  https://nvd.nist.gov/vuln/detail/CVE-2023-30587
[ 39 ] CVE-2023-30588  https://nvd.nist.gov/vuln/detail/CVE-2023-30588
[ 40 ] CVE-2023-30589  https://nvd.nist.gov/vuln/detail/CVE-2023-30589
[ 41 ] CVE-2023-30590  https://nvd.nist.gov/vuln/detail/CVE-2023-30590
[ 42 ] CVE-2023-32002  https://nvd.nist.gov/vuln/detail/CVE-2023-32002
[ 43 ] CVE-2023-32003  https://nvd.nist.gov/vuln/detail/CVE-2023-32003
[ 44 ] CVE-2023-32004  https://nvd.nist.gov/vuln/detail/CVE-2023-32004
[ 45 ] CVE-2023-32005  https://nvd.nist.gov/vuln/detail/CVE-2023-32005
[ 46 ] CVE-2023-32006  https://nvd.nist.gov/vuln/detail/CVE-2023-32006
[ 47 ] CVE-2023-32558  https://nvd.nist.gov/vuln/detail/CVE-2023-32558
[ 48 ] CVE-2023-32559  https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0378", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.5.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, 
"vendor": "siemens", "version": "1.0" }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "18.5.0", "versionStartIncluding": "18.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2022-32222" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Gentoo", "sources": [ { "db": "PACKETSTORM", "id": "178512" } ], "trust": 0.1 }, "cve": "CVE-2022-32222", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.3, "baseSeverity": "MEDIUM", "confidentialityImpact": "NONE", "exploitabilityScore": 3.9, "impactScore": 1.4, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.3, "baseSeverity": "Medium", "confidentialityImpact": "None", "exploitabilityScore": null, "id": "CVE-2022-32222", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-32222", "trust": 1.8, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202207-682", "trust": 0.6, "value": "MEDIUM" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "CNNVD", "id": "CNNVD-202207-682" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A cryptographic vulnerability exists on Node.js on linux in versions of 18.x prior to 18.40.0 which allowed a default path 
for openssl.cnf that might be accessible under some circumstances to a non-admin user instead of /etc/ssl as was the case in versions prior to the upgrade to OpenSSL 3. Node.js Foundation of Node.js Products from multiple other vendors are vulnerable to uncontrolled search path elements.Information may be tampered with. Node.js July 7th 2022 Security Releases: Attempt to read openssl.cnf from /home/iojs/build/ upon startup. When Node.js starts on linux based systems, it attempts to read /home/iojs/build/ws/out/Release/obj.target/deps/openssl/openssl.cnf, which ordinarily doesn\u0027t exist. On some shared systems an attacker may be able create this file and therefore affect the default OpenSSL configuration for other users. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. \n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. 
\n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n 
https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-32222" }, { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "VULMON", "id": "CVE-2022-32222" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 1.8 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-32222", "trust": 3.4 }, { "db": "HACKERONE", "id": "1695596", "trust": 2.4 }, { "db": "JVNDB", "id": "JVNDB-2022-013242", "trust": 0.8 }, { "db": "CS-HELP", "id": "SB2022071338", "trust": 0.6 }, { "db": "SIEMENS", "id": "SSA-332410", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202207-682", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-32222", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32222" }, { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-682" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "id": "VAR-202207-0378", "iot": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-05-12T00:07:29.801000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-427", "trust": 1.0 }, { "problemtype": "Uncontrolled search path elements (CWE-427) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://hackerone.com/reports/1695596" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.7, "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 0.6, "url": "https://security.netapp.com/advisory/ntap-20220915-0001/" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2022-32222" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-32222/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" 
}, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32222" }, { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-682" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-32222" }, { "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": 
"CNNVD", "id": "CNNVD-202207-682" }, { "db": "NVD", "id": "CVE-2022-32222" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-06T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-07-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-682" }, { "date": "2022-07-14T15:15:08.437000", "db": "NVD", "id": "CVE-2022-32222" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-06T08:23:00", "db": "JVNDB", "id": "JVNDB-2022-013242" }, { "date": "2023-07-25T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-682" }, { "date": "2023-07-24T13:16:33.287000", "db": "NVD", "id": "CVE-2022-32222" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-682" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Node.js\u00a0Foundation\u00a0 of \u00a0Node.js\u00a0 Uncontrolled Search Path Element Vulnerability in Products from Other Vendors", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013242" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-682" } ], "trust": 0.6 } }
var-202312-0207
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges. Siemens' SINEC INS thus contains a certificate validation vulnerability: information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may result
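The flaw above is a classic CWE-295 pattern: the TLS client connects but never verifies who it is talking to. A minimal Python sketch of the difference between a properly validating client context and the vulnerable anti-pattern (illustrative only; SINEC INS itself is not written in Python, and no SINEC/UMC APIs are shown):

```python
import ssl


def make_verified_context(cafile=None):
    """A TLS client context that validates the server certificate chain
    and hostname -- the check the affected SINEC INS versions skip for
    the configured UMC server."""
    ctx = ssl.create_default_context(cafile=cafile)
    # create_default_context already enables these; made explicit here:
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx


def make_vulnerable_context():
    """The anti-pattern: accept any certificate. A man-in-the-middle can
    then intercept credentials and manipulate responses, as described
    above."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False      # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

With the verified context, `ctx.wrap_socket(sock, server_hostname=...)` fails the handshake against a spoofed server instead of silently sending credentials to it.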
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0207", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": "1.0" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48427" } ] }, "cve": "CVE-2023-48427", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", 
"confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 8.1, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2023-48427", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2023-48427", "trust": 1.8, "value": "CRITICAL" }, { "author": "productcert@siemens.com", "id": "CVE-2023-48427", "trust": 1.0, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. 
This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges. Siemens\u0027 SINEC INS Exists in a certificate validation vulnerability.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2023-48427" }, { "db": "JVNDB", "id": "JVNDB-2023-019617" } ], "trust": 1.62 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2023-48427", "trust": 2.6 }, { "db": "SIEMENS", "id": "SSA-077170", "trust": 1.8 }, { "db": "ICS CERT", "id": "ICSA-23-348-16", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU98271228", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-019617", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "id": "VAR-202312-0207", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-01-18T20:58:59.232000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-295", "trust": 1.0 }, { "problemtype": "Illegal certificate verification (CWE-295) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "references": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu98271228/" }, { "trust": 0.8, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-48427" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-16" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "db": "NVD", "id": "CVE-2023-48427" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2024-01-15T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "date": "2023-12-12T12:15:14.677000", "db": "NVD", "id": "CVE-2023-48427" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2024-01-15T02:20:00", "db": "JVNDB", "id": "JVNDB-2023-019617" }, { "date": "2023-12-14T20:07:17.240000", "db": "NVD", "id": "CVE-2023-48427" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 Certificate validation vulnerabilities in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019617" } ], "trust": 0.8 } }
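The record above carries two CVSS v3.1 assessments of the same CVE (NVD scores it 9.8 with AC:L, Siemens' ProductCERT 8.1 with AC:H). The vector strings are machine-readable; a small sketch of splitting one into its metrics, which is how such per-source differences can be compared programmatically:

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3.x vector string, e.g.
    'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H',
    into a {metric: value} dict."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:3"):
        raise ValueError("not a CVSS v3 vector: %r" % vector)
    return dict(p.split(":", 1) for p in parts[1:])


# The two vectors recorded for CVE-2023-48427 differ only in AC:
nvd = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
siemens = parse_cvss_vector("CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H")
```

This only tokenizes the vector; computing the numeric base score additionally requires the weighting formulas from the CVSS v3.1 specification.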
var-202207-0381
Vulnerability from variot
An OS Command Injection vulnerability exists in Node.js versions <14.20.0, <16.20.0, <18.5.0 due to an insufficient IsAllowedHost check that can easily be bypassed because IsIPAddress does not properly check if an IP address is invalid before making DNS requests, allowing rebinding attacks. This affects Node.js from the Node.js Foundation as well as products from other vendors that bundle it. Information may be obtained, information may be tampered with, and a denial-of-service (DoS) condition may result. Node.js July 7th 2022 Security Releases: DNS rebinding in --inspect via invalid IP addresses. When an invalid IPv4 address is provided (for instance 10.0.2.555), browsers (such as Firefox) will make DNS requests to the DNS server, providing a vector for an attacker-controlled DNS server or a MITM who can spoof DNS responses to perform a rebinding attack and hence connect to the WebSocket debugger, allowing for arbitrary code execution. This is a bypass of CVE-2021-22884. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
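The rebinding vector hinges on treating a malformed literal like 10.0.2.555 as "not an IP" and letting the resolver handle it. A minimal Python sketch of the strict parse that closes that gap (illustrative only; the actual Node.js fix lives in its C++/JS internals, and IsIPAddress/IsAllowedHost are Node.js internal names, not Python APIs):

```python
import ipaddress


def is_valid_ip(host):
    """Strictly parse an IP literal. '10.0.2.555' is not a valid IPv4
    address, but a browser handed such a string may fall back to DNS
    resolution -- the rebinding vector behind CVE-2022-32212. A strict
    parse rejects it up front instead of letting it reach the resolver."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False
```

An allow-host check built on this predicate can then refuse malformed dotted-quads outright rather than classify them as hostnames to be resolved.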
====================================================================
Red Hat Security Advisory
Synopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update
Advisory ID: RHSA-2022:6389-01
Product: Red Hat Software Collections
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6389
Issue date: 2022-09-08
CVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-33987
====================================================================
1. Summary:
An update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now available for Red Hat Software Collections.
Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64 Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64
- Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: rh-nodejs14-nodejs (14.20.0).
Security Fix(es):
- nodejs: DNS rebinding in --inspect via invalid IP addresses (CVE-2022-32212)
- nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding (CVE-2022-32213)
- nodejs: HTTP request smuggling due to improper delimiting of header fields (CVE-2022-32214)
- nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding (CVE-2022-32215)
- got: missing verification of requested URLs allows redirects to UNIX sockets (CVE-2022-33987)
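Three of the fixes above are HTTP request smuggling issues rooted in lenient framing-header parsing. A minimal sketch of the strict checks a conforming parser applies (illustrative only; this is not the llhttp code Node.js actually patched):

```python
def has_smuggling_risk(headers):
    """Flag header blocks whose body framing is ambiguous -- the kind of
    leniency behind CVE-2022-32213/32214/32215. `headers` is a list of
    (name, value) pairs as received on the wire."""
    te = [v for k, v in headers if k.strip().lower() == "transfer-encoding"]
    cl = [v for k, v in headers if k.strip().lower() == "content-length"]
    if te and cl:
        # Content-Length alongside Transfer-Encoding: front-end and
        # back-end may frame the body differently (RFC 7230 sec. 3.3.3).
        return True
    for v in te:
        if v.strip().lower() != "chunked":
            # Obfuscated or unknown coding (e.g. 'xchunked', multi-line
            # continuations folded into the value) must be rejected, not
            # silently downgraded.
            return True
    return False
```

A front-end proxy and a back-end server that disagree on any of these cases will split one TCP stream into different request boundaries, which is exactly what a smuggled request exploits.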
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets
2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses
2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding
2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields
2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding
2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]
- Package List:
Red Hat Software Collections for Red Hat Enterprise Linux Server (v. 7):
Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm
noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm
ppc64le: rh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm
s390x: rh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm
x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
Red Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):
Source: rh-nodejs14-nodejs-14.20.0-2.el7.src.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm
noarch: rh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm rh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm
x86_64: rh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm rh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm rh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- References:
https://access.redhat.com/security/cve/CVE-2022-32212 https://access.redhat.com/security/cve/CVE-2022-32213 https://access.redhat.com/security/cve/CVE-2022-32214 https://access.redhat.com/security/cve/CVE-2022-32215 https://access.redhat.com/security/cve/CVE-2022-33987 https://access.redhat.com/security/updates/classification/#moderate
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/ ODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm VScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ bAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF IPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq +62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM 4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M 3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91 BYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI nBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX bcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz hGdWoRKL34w\xcePC -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . 9) - aarch64, noarch, ppc64le, s390x, x86_64
-----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-5326-1          security@debian.org
https://www.debian.org/security/             Aron Xu
January 24, 2023                             https://www.debian.org/security/faq
Package : nodejs CVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215 CVE-2022-35255 CVE-2022-35256 CVE-2022-43548
Multiple vulnerabilities were discovered in Node.js, which could result in HTTP request smuggling, bypass of host IP address validation and weak randomness setup.
For the stable distribution (bullseye), these problems have been fixed in version 12.22.12~dfsg-1~deb11u3.
We recommend that you upgrade your nodejs packages.
For the detailed security status of nodejs please refer to its security tracker page at: https://security-tracker.debian.org/tracker/nodejs
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8 TjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp WblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd Txb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW xbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9 0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf EtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2 idXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w Y9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7 u0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu boP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH ujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\xfeRn -----END PGP SIGNATURE----- . - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - Gentoo Linux Security Advisory GLSA 202405-29
https://security.gentoo.org/
Severity: Low Title: Node.js: Multiple Vulnerabilities Date: May 08, 2024 Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614 ID: 202405-29
Synopsis
Multiple vulnerabilities have been discovered in Node.js.
Background
Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine.
Affected packages
Package Vulnerable Unaffected
net-libs/nodejs < 16.20.2 >= 16.20.2
Description
Multiple vulnerabilities have been discovered in Node.js. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All Node.js 20 users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-20.5.1"
All Node.js 18 users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-18.17.1"
All Node.js 16 users should upgrade to the latest version:
# emerge --sync # emerge --ask --oneshot --verbose ">=net-libs/nodejs-16.20.2"
References
[ 1 ] CVE-2020-7774 https://nvd.nist.gov/vuln/detail/CVE-2020-7774 [ 2 ] CVE-2021-3672 https://nvd.nist.gov/vuln/detail/CVE-2021-3672 [ 3 ] CVE-2021-22883 https://nvd.nist.gov/vuln/detail/CVE-2021-22883 [ 4 ] CVE-2021-22884 https://nvd.nist.gov/vuln/detail/CVE-2021-22884 [ 5 ] CVE-2021-22918 https://nvd.nist.gov/vuln/detail/CVE-2021-22918 [ 6 ] CVE-2021-22930 https://nvd.nist.gov/vuln/detail/CVE-2021-22930 [ 7 ] CVE-2021-22931 https://nvd.nist.gov/vuln/detail/CVE-2021-22931 [ 8 ] CVE-2021-22939 https://nvd.nist.gov/vuln/detail/CVE-2021-22939 [ 9 ] CVE-2021-22940 https://nvd.nist.gov/vuln/detail/CVE-2021-22940 [ 10 ] CVE-2021-22959 https://nvd.nist.gov/vuln/detail/CVE-2021-22959 [ 11 ] CVE-2021-22960 https://nvd.nist.gov/vuln/detail/CVE-2021-22960 [ 12 ] CVE-2021-37701 https://nvd.nist.gov/vuln/detail/CVE-2021-37701 [ 13 ] CVE-2021-37712 https://nvd.nist.gov/vuln/detail/CVE-2021-37712 [ 14 ] CVE-2021-39134 https://nvd.nist.gov/vuln/detail/CVE-2021-39134 [ 15 ] CVE-2021-39135 https://nvd.nist.gov/vuln/detail/CVE-2021-39135 [ 16 ] CVE-2021-44531 https://nvd.nist.gov/vuln/detail/CVE-2021-44531 [ 17 ] CVE-2021-44532 https://nvd.nist.gov/vuln/detail/CVE-2021-44532 [ 18 ] CVE-2021-44533 https://nvd.nist.gov/vuln/detail/CVE-2021-44533 [ 19 ] CVE-2022-0778 https://nvd.nist.gov/vuln/detail/CVE-2022-0778 [ 20 ] CVE-2022-3602 https://nvd.nist.gov/vuln/detail/CVE-2022-3602 [ 21 ] CVE-2022-3786 https://nvd.nist.gov/vuln/detail/CVE-2022-3786 [ 22 ] CVE-2022-21824 https://nvd.nist.gov/vuln/detail/CVE-2022-21824 [ 23 ] CVE-2022-32212 https://nvd.nist.gov/vuln/detail/CVE-2022-32212 [ 24 ] CVE-2022-32213 https://nvd.nist.gov/vuln/detail/CVE-2022-32213 [ 25 ] CVE-2022-32214 https://nvd.nist.gov/vuln/detail/CVE-2022-32214 [ 26 ] CVE-2022-32215 https://nvd.nist.gov/vuln/detail/CVE-2022-32215 [ 27 ] CVE-2022-32222 https://nvd.nist.gov/vuln/detail/CVE-2022-32222 [ 28 ] CVE-2022-35255 https://nvd.nist.gov/vuln/detail/CVE-2022-35255 [ 29 ] CVE-2022-35256 
https://nvd.nist.gov/vuln/detail/CVE-2022-35256 [ 30 ] CVE-2022-35948 https://nvd.nist.gov/vuln/detail/CVE-2022-35948 [ 31 ] CVE-2022-35949 https://nvd.nist.gov/vuln/detail/CVE-2022-35949 [ 32 ] CVE-2022-43548 https://nvd.nist.gov/vuln/detail/CVE-2022-43548 [ 33 ] CVE-2023-30581 https://nvd.nist.gov/vuln/detail/CVE-2023-30581 [ 34 ] CVE-2023-30582 https://nvd.nist.gov/vuln/detail/CVE-2023-30582 [ 35 ] CVE-2023-30583 https://nvd.nist.gov/vuln/detail/CVE-2023-30583 [ 36 ] CVE-2023-30584 https://nvd.nist.gov/vuln/detail/CVE-2023-30584 [ 37 ] CVE-2023-30586 https://nvd.nist.gov/vuln/detail/CVE-2023-30586 [ 38 ] CVE-2023-30587 https://nvd.nist.gov/vuln/detail/CVE-2023-30587 [ 39 ] CVE-2023-30588 https://nvd.nist.gov/vuln/detail/CVE-2023-30588 [ 40 ] CVE-2023-30589 https://nvd.nist.gov/vuln/detail/CVE-2023-30589 [ 41 ] CVE-2023-30590 https://nvd.nist.gov/vuln/detail/CVE-2023-30590 [ 42 ] CVE-2023-32002 https://nvd.nist.gov/vuln/detail/CVE-2023-32002 [ 43 ] CVE-2023-32003 https://nvd.nist.gov/vuln/detail/CVE-2023-32003 [ 44 ] CVE-2023-32004 https://nvd.nist.gov/vuln/detail/CVE-2023-32004 [ 45 ] CVE-2023-32005 https://nvd.nist.gov/vuln/detail/CVE-2023-32005 [ 46 ] CVE-2023-32006 https://nvd.nist.gov/vuln/detail/CVE-2023-32006 [ 47 ] CVE-2023-32558 https://nvd.nist.gov/vuln/detail/CVE-2023-32558 [ 48 ] CVE-2023-32559 https://nvd.nist.gov/vuln/detail/CVE-2023-32559
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202405-29
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2024 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202207-0381", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.0.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "14.15.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", 
"version": "16.0.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "14.14.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "node.js", "scope": "lte", "trust": 1.0, "vendor": "nodejs", "version": "16.12.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "37" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "14.20.1" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "16.13.0" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "16.17.1" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "36" }, { "model": "fedora", "scope": "eq", "trust": 1.0, "vendor": "fedoraproject", "version": "35" }, { "model": "node.js", "scope": "lt", "trust": 1.0, "vendor": "nodejs", "version": "18.5.0" }, { "model": "node.js", "scope": "gte", "trust": 1.0, "vendor": "nodejs", "version": "18.0.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "fedora", "scope": null, "trust": 0.8, "vendor": "fedora", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null }, { "model": "node.js", "scope": null, "trust": 0.8, "vendor": "node js", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": 
"@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndExcluding": "18.5.0", "versionStartIncluding": "18.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "14.14.0", "versionStartIncluding": "14.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:-:*:*:*", "cpe_name": [], "versionEndIncluding": "16.12.0", "versionStartIncluding": "16.0.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "14.20.1", "versionStartIncluding": "14.15.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:nodejs:node.js:*:*:*:*:lts:*:*:*", "cpe_name": [], "versionEndExcluding": "16.17.1", "versionStartIncluding": "16.13.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:35:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:36:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:fedoraproject:fedora:37:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { 
"db": "NVD", "id": "CVE-2022-32212" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" } ], "trust": 0.5 }, "cve": "CVE-2022-32212", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 8.1, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "High", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 8.1, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2022-32212", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": 
"None", "vectorString": "CVSS:3.0/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-32212", "trust": 1.8, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202207-684", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "CNNVD", "id": "CNNVD-202207-684" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A OS Command Injection vulnerability exists in Node.js versions \u003c14.20.0, \u003c16.20.0, \u003c18.5.0 due to an insufficient IsAllowedHost check that can easily be bypassed because IsIPAddress does not properly check if an IP address is invalid before making DBS requests allowing rebinding attacks. Node.js Foundation of Node.js For products from other vendors, OS A command injection vulnerability exists.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Node.js July 7th 2022 Security Releases: DNS rebinding in --inspect via invalid IP addresses. When an invalid IPv4 address is provided (for instance 10.0.2.555 is provided), browsers (such as Firefox) will make DNS requests to the DNS server, providing a vector for an attacker-controlled DNS server or a MITM who can spoof DNS responses to perform a rebinding attack and hence connect to the WebSocket debugger, allowing for arbitrary code execution. This is a bypass of CVE-2021-22884. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security and bug fix update\nAdvisory ID: RHSA-2022:6389-01\nProduct: Red Hat Software Collections\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6389\nIssue date: 2022-09-08\nCVE Names: CVE-2022-32212 CVE-2022-32213 CVE-2022-32214\n CVE-2022-32215 CVE-2022-33987\n====================================================================\n1. Summary:\n\nAn update for rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon is now\navailable for Red Hat Software Collections. \n\nRed Hat Product Security has rated this update as having a security impact\nof Moderate. A Common Vulnerability Scoring System (CVSS) base score, which\ngives a detailed severity rating, is available for each vulnerability from\nthe CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 7) - noarch, ppc64le, s390x, x86_64\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7) - noarch, x86_64\n\n3. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. \n\nThe following packages have been upgraded to a later upstream version:\nrh-nodejs14-nodejs (14.20.0). 
\n\nSecurity Fix(es):\n\n* nodejs: DNS rebinding in --inspect via invalid IP addresses\n(CVE-2022-32212)\n\n* nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n(CVE-2022-32213)\n\n* nodejs: HTTP request smuggling due to improper delimiting of header\nfields (CVE-2022-32214)\n\n* nodejs: HTTP request smuggling due to incorrect parsing of multi-line\nTransfer-Encoding (CVE-2022-32215)\n\n* got: missing verification of requested URLs allows redirects to UNIX\nsockets (CVE-2022-33987)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\n* rh-nodejs14-nodejs: rebase to latest upstream release (BZ#2106673)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n2102001 - CVE-2022-33987 got: missing verification of requested URLs allows redirects to UNIX sockets\n2105422 - CVE-2022-32212 nodejs: DNS rebinding in --inspect via invalid IP addresses\n2105426 - CVE-2022-32215 nodejs: HTTP request smuggling due to incorrect parsing of multi-line Transfer-Encoding\n2105428 - CVE-2022-32214 nodejs: HTTP request smuggling due to improper delimiting of header fields\n2105430 - CVE-2022-32213 nodejs: HTTP request smuggling due to flawed parsing of Transfer-Encoding\n2106673 - rh-nodejs14-nodejs: rebase to latest upstream release [rhscl-3.8.z]\n\n6. Package List:\n\nRed Hat Software Collections for Red Hat Enterprise Linux Server (v. 
7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nppc64le:\nrh-nodejs14-nodejs-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.ppc64le.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.ppc64le.rpm\n\ns390x:\nrh-nodejs14-nodejs-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.s390x.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.s390x.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nRed Hat Software Collections for Red Hat Enterprise Linux Workstation (v. 7):\n\nSource:\nrh-nodejs14-nodejs-14.20.0-2.el7.src.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.src.rpm\n\nnoarch:\nrh-nodejs14-nodejs-docs-14.20.0-2.el7.noarch.rpm\nrh-nodejs14-nodejs-nodemon-2.0.19-1.el7.noarch.rpm\n\nx86_64:\nrh-nodejs14-nodejs-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-debuginfo-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-nodejs-devel-14.20.0-2.el7.x86_64.rpm\nrh-nodejs14-npm-6.14.17-14.20.0.2.el7.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. References:\n\nhttps://access.redhat.com/security/cve/CVE-2022-32212\nhttps://access.redhat.com/security/cve/CVE-2022-32213\nhttps://access.redhat.com/security/cve/CVE-2022-32214\nhttps://access.redhat.com/security/cve/CVE-2022-32215\nhttps://access.redhat.com/security/cve/CVE-2022-33987\nhttps://access.redhat.com/security/updates/classification/#moderate\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYxnqU9zjgjWX9erEAQipBg/+NJmkBsKEPkFHZAiZhGKiwIkwaFcHK+e/\nODClFTTT9SkkMBheuc9HQDmwukaVlLMvbOJSVL/6NvuLQvOcQHtprOAJXr3I6KQm\nVScJRQny4et+D/N3bJJiuhqe9YY9Bh+EP7omS4aq2UuphEhkuTSQ0V2+Fa4O8wdZ\nbAhUhU660Q6aGzNGvcyz8vi7ohmOFZS94/x2Lr6cBG8LF0dmr/pIw+uPlO36ghXF\nIPEM3VcGisTGQRg2Xy5yqeouK1S+YAcZ1f0QUOePP+WRhIecfmG3cj6oYTRnrOyq\n+62525BHDNjIz55z6H32dKBIy+r+HT7WaOGgPwvH+ugmlH6NyKHjSyy+IJoglkfM\n4+QA0zun7WhLet5y4jmsWCpT3mOCWj7h+iW6IqTlfcad3wCQ6OnySRq67W3GDq+M\n3kdUdBoyfLm1vzLceEF4AK8qChj7rVl8x0b4v8OfRGv6ZEIe+BfJYNzI9HeuIE91\nBYtLGe18vMs5mcWxcYMWlfAgzVSGTaqaaBie9qPtAThs00lJd9oRf/Mfga42/6vI\nnBLHwE3NyPyKfaLvcyLa/oPwGnOhKyPtD8HeN2MORm6RUeUClaq9s+ihDIPvbyLX\nbcKKdjGoJDWyJy2yU2GkVwrbF6gcKgdvo2uFckOpouKQ4P9KEooI/15fLy8NPIZz\nhGdWoRKL34w\\xcePC\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 9) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5326-1 security@debian.org\nhttps://www.debian.org/security/ Aron Xu\nJanuary 24, 2023 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : nodejs\nCVE ID : CVE-2022-32212 CVE-2022-32213 CVE-2022-32214 CVE-2022-32215\n CVE-2022-35255 CVE-2022-35256 CVE-2022-43548\n\nMultiple vulnerabilities were discovered in Node.js, which could result\nin HTTP request smuggling, bypass of host IP address validation and weak\nrandomness setup. \n\nFor the stable distribution (bullseye), these problems have been fixed in\nversion 12.22.12~dfsg-1~deb11u3. \n\nWe recommend that you upgrade your nodejs packages. 
\n\nFor the detailed security status of nodejs please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/nodejs\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP SIGNATURE-----\n\niQIzBAEBCgAdFiEEtuYvPRKsOElcDakFEMKTtsN8TjYFAmPQNhIACgkQEMKTtsN8\nTjaRmA/+KDFkQcd2sE/eAAx9cVikICNkfu7uIVKHpeDH9o5oq5M2nj4zHJCeAArp\nWblguyZwEtqzAOO2WesbrmwfXLmglhrNZwRMOrsbu63JxSnecp7qcMwR8A4JWdmd\nTxb4aZr6Prmwq6fT0G3K6oV8Hw+OeqYA/RZKenxtkBf/jdzVahGJHJ/NrFKKWVQW\nxbqHwCkP7uUlm+5UR5XzNrodTRCQYHJvUmDUrjEOjM6x+sqYirKWiERN0A14kVn9\n0Ufrw6+Z2tKhdKFZfU1BtDthhlH/nybz0h3aHsk+E5/vx20WAURiCEDVi7nf8+Rf\nEtbCxaqV+/xVoPmXStHY/ogCo8CgRVsyYUIemgi4q5LwVx/Oqjm2CJ/xCwOKh0E2\nidXLJfLSpxxBe598MUn9iKbnFFCN9DQZXf7BYs3djtn8ALFVBSHZSF1QXFoFQ86w\nY9xGhBQzfEgCoEW7H4S30ZQ+Gz+ZnOMCSH+MKIMtSpqbc7wLtrKf839DO6Uux7B7\nu0WR3lZlsihi92QKq9X/VRkyy8ZiA2TYy3IE+KDKlXDHKls9FR9BUClYe9L8RiRu\nboP8KPFUHUsSVaTzkufMStdKkcXCqgj/6KhJL6E9ZunTBpTmqx1Ty7/N2qktLFnH\nujrffzV3rCE6eIg7ps8OdZbjCfqUqmQk9/pV6ZDjymqjZ1LKZDs\\xfeRn\n-----END PGP SIGNATURE-----\n. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202405-29\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Low\n Title: Node.js: Multiple Vulnerabilities\n Date: May 08, 2024\n Bugs: #772422, #781704, #800986, #805053, #807775, #811273, #817938, #831037, #835615, #857111, #865627, #872692, #879617, #918086, #918614\n ID: 202405-29\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n=======\nMultiple vulnerabilities have been discovered in Node.js. 
\n\nBackground\n=========\nNode.js is a JavaScript runtime built on Chrome\u2019s V8 JavaScript engine. \n\nAffected packages\n================\nPackage Vulnerable Unaffected\n--------------- ------------ ------------\nnet-libs/nodejs \u003c 16.20.2 \u003e= 16.20.2\n\nDescription\n==========\nMultiple vulnerabilities have been discovered in Node.js. Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n=====\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n=========\nThere is no known workaround at this time. \n\nResolution\n=========\nAll Node.js 20 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-20.5.1\"\n\nAll Node.js 18 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-18.17.1\"\n\nAll Node.js 16 users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=net-libs/nodejs-16.20.2\"\n\nReferences\n=========\n[ 1 ] CVE-2020-7774\n https://nvd.nist.gov/vuln/detail/CVE-2020-7774\n[ 2 ] CVE-2021-3672\n https://nvd.nist.gov/vuln/detail/CVE-2021-3672\n[ 3 ] CVE-2021-22883\n https://nvd.nist.gov/vuln/detail/CVE-2021-22883\n[ 4 ] CVE-2021-22884\n https://nvd.nist.gov/vuln/detail/CVE-2021-22884\n[ 5 ] CVE-2021-22918\n https://nvd.nist.gov/vuln/detail/CVE-2021-22918\n[ 6 ] CVE-2021-22930\n https://nvd.nist.gov/vuln/detail/CVE-2021-22930\n[ 7 ] CVE-2021-22931\n https://nvd.nist.gov/vuln/detail/CVE-2021-22931\n[ 8 ] CVE-2021-22939\n https://nvd.nist.gov/vuln/detail/CVE-2021-22939\n[ 9 ] CVE-2021-22940\n https://nvd.nist.gov/vuln/detail/CVE-2021-22940\n[ 10 ] CVE-2021-22959\n https://nvd.nist.gov/vuln/detail/CVE-2021-22959\n[ 11 ] CVE-2021-22960\n https://nvd.nist.gov/vuln/detail/CVE-2021-22960\n[ 12 ] CVE-2021-37701\n https://nvd.nist.gov/vuln/detail/CVE-2021-37701\n[ 13 ] CVE-2021-37712\n 
https://nvd.nist.gov/vuln/detail/CVE-2021-37712\n[ 14 ] CVE-2021-39134\n https://nvd.nist.gov/vuln/detail/CVE-2021-39134\n[ 15 ] CVE-2021-39135\n https://nvd.nist.gov/vuln/detail/CVE-2021-39135\n[ 16 ] CVE-2021-44531\n https://nvd.nist.gov/vuln/detail/CVE-2021-44531\n[ 17 ] CVE-2021-44532\n https://nvd.nist.gov/vuln/detail/CVE-2021-44532\n[ 18 ] CVE-2021-44533\n https://nvd.nist.gov/vuln/detail/CVE-2021-44533\n[ 19 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 20 ] CVE-2022-3602\n https://nvd.nist.gov/vuln/detail/CVE-2022-3602\n[ 21 ] CVE-2022-3786\n https://nvd.nist.gov/vuln/detail/CVE-2022-3786\n[ 22 ] CVE-2022-21824\n https://nvd.nist.gov/vuln/detail/CVE-2022-21824\n[ 23 ] CVE-2022-32212\n https://nvd.nist.gov/vuln/detail/CVE-2022-32212\n[ 24 ] CVE-2022-32213\n https://nvd.nist.gov/vuln/detail/CVE-2022-32213\n[ 25 ] CVE-2022-32214\n https://nvd.nist.gov/vuln/detail/CVE-2022-32214\n[ 26 ] CVE-2022-32215\n https://nvd.nist.gov/vuln/detail/CVE-2022-32215\n[ 27 ] CVE-2022-32222\n https://nvd.nist.gov/vuln/detail/CVE-2022-32222\n[ 28 ] CVE-2022-35255\n https://nvd.nist.gov/vuln/detail/CVE-2022-35255\n[ 29 ] CVE-2022-35256\n https://nvd.nist.gov/vuln/detail/CVE-2022-35256\n[ 30 ] CVE-2022-35948\n https://nvd.nist.gov/vuln/detail/CVE-2022-35948\n[ 31 ] CVE-2022-35949\n https://nvd.nist.gov/vuln/detail/CVE-2022-35949\n[ 32 ] CVE-2022-43548\n https://nvd.nist.gov/vuln/detail/CVE-2022-43548\n[ 33 ] CVE-2023-30581\n https://nvd.nist.gov/vuln/detail/CVE-2023-30581\n[ 34 ] CVE-2023-30582\n https://nvd.nist.gov/vuln/detail/CVE-2023-30582\n[ 35 ] CVE-2023-30583\n https://nvd.nist.gov/vuln/detail/CVE-2023-30583\n[ 36 ] CVE-2023-30584\n https://nvd.nist.gov/vuln/detail/CVE-2023-30584\n[ 37 ] CVE-2023-30586\n https://nvd.nist.gov/vuln/detail/CVE-2023-30586\n[ 38 ] CVE-2023-30587\n https://nvd.nist.gov/vuln/detail/CVE-2023-30587\n[ 39 ] CVE-2023-30588\n https://nvd.nist.gov/vuln/detail/CVE-2023-30588\n[ 40 ] CVE-2023-30589\n 
https://nvd.nist.gov/vuln/detail/CVE-2023-30589\n[ 41 ] CVE-2023-30590\n https://nvd.nist.gov/vuln/detail/CVE-2023-30590\n[ 42 ] CVE-2023-32002\n https://nvd.nist.gov/vuln/detail/CVE-2023-32002\n[ 43 ] CVE-2023-32003\n https://nvd.nist.gov/vuln/detail/CVE-2023-32003\n[ 44 ] CVE-2023-32004\n https://nvd.nist.gov/vuln/detail/CVE-2023-32004\n[ 45 ] CVE-2023-32005\n https://nvd.nist.gov/vuln/detail/CVE-2023-32005\n[ 46 ] CVE-2023-32006\n https://nvd.nist.gov/vuln/detail/CVE-2023-32006\n[ 47 ] CVE-2023-32558\n https://nvd.nist.gov/vuln/detail/CVE-2023-32558\n[ 48 ] CVE-2023-32559\n https://nvd.nist.gov/vuln/detail/CVE-2023-32559\n\nAvailability\n===========\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202405-29\n\nConcerns?\n========\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n======\nCopyright 2024 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. 
\n\nhttps://creativecommons.org/licenses/by-sa/2.5\n", "sources": [ { "db": "NVD", "id": "CVE-2022-32212" }, { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "VULMON", "id": "CVE-2022-32212" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" } ], "trust": 2.34 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-32212", "trust": 4.0 }, { "db": "HACKERONE", "id": "1632921", "trust": 2.4 }, { "db": "JVNDB", "id": "JVNDB-2022-013369", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168305", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "169410", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168442", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168358", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "170727", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2022072639", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071338", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022072522", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071612", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022071827", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3586", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3488", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3487", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2023.0997", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3505", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4101", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4681", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.3673", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4136", "trust": 0.6 }, { 
"db": "SIEMENS", "id": "SSA-332410", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202207-684", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2022-32212", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168359", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "178512", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32212" }, { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-684" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "id": "VAR-202207-0381", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:59:00.866000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-78", "trust": 1.0 }, { "problemtype": "OS Command injection (CWE-78) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://hackerone.com/reports/1632921" }, { "trust": 1.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32212" }, { "trust": 1.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-32212" }, { "trust": 0.7, "url": "https://nodejs.org/en/blog/vulnerability/july-2022-security-releases/" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32213" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32215" }, { "trust": 0.7, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32214" }, { "trust": 0.6, "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2018-7160" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/vmqk5l5sbyd47qqz67lemhnq662gh3oy/" }, { "trust": 0.6, "url": "https://www.debian.org/security/2023/dsa-5326" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/2icg6csib3guwh5dusqevx53mojw7lyk/" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" }, { "trust": 0.6, "url": "https://security.netapp.com/advisory/ntap-20220915-0001/" }, { "trust": 0.6, "url": "https://cve.mitre.org/cgi-bin/cvename.cgi?name=cve-2021-22884" }, { "trust": 0.6, "url": "https://lists.debian.org/debian-lts-announce/2022/10/msg00006.html" }, { "trust": 0.6, "url": "https://lists.fedoraproject.org/archives/list/package-announce@lists.fedoraproject.org/message/qcnn3yg2bcls4zekj3clsut6as7axth3/" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/170727/debian-security-advisory-5326-1.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3505" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168305/red-hat-security-advisory-2022-6389-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072522" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168442/red-hat-security-advisory-2022-6595-01.html" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168358/red-hat-security-advisory-2022-6449-01.html" }, 
{ "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2023.0997" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4681" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022072639" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4101" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3673" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4136" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3487" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071827" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3586" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.3488" }, { "trust": 0.6, "url": "https://cxsecurity.com/cveshow/cve-2022-32212/" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071612" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/169410/red-hat-security-advisory-2022-6985-01.html" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022071338" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32214" }, { "trust": 0.5, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32213" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-33987" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-32215" }, { "trust": 0.5, "url": 
"https://access.redhat.com/security/cve/cve-2022-33987" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35255" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6389" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6985" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-33502" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6595" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-33502" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-7788" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-28469" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29244" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-28469" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7788" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6449" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6448" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/nodejs" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22960" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30587" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32006" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22931" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-32222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22939" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32558" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30588" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3672" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35949" }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22959" }, { "trust": 0.1, "url": "https://security.gentoo.org/glsa/202405-29" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32004" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30584" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-7774" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30589" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32003" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22883" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22884" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35948" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30582" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3602" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3786" }, 
{ "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30590" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30586" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22940" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32005" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-32559" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22930" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39134" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30581" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37712" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2023-30583" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-32212" }, { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": "PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-684" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-32212" }, { "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "db": "PACKETSTORM", "id": "168305" }, { "db": "PACKETSTORM", "id": "169410" }, { "db": "PACKETSTORM", "id": "168442" }, { "db": "PACKETSTORM", "id": "168358" }, { "db": "PACKETSTORM", "id": "168359" }, { "db": "PACKETSTORM", "id": "170727" }, { "db": 
"PACKETSTORM", "id": "178512" }, { "db": "CNNVD", "id": "CNNVD-202207-684" }, { "db": "NVD", "id": "CVE-2022-32212" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-07T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "date": "2022-09-08T14:41:32", "db": "PACKETSTORM", "id": "168305" }, { "date": "2022-10-18T22:30:49", "db": "PACKETSTORM", "id": "169410" }, { "date": "2022-09-21T13:47:04", "db": "PACKETSTORM", "id": "168442" }, { "date": "2022-09-13T15:43:41", "db": "PACKETSTORM", "id": "168358" }, { "date": "2022-09-13T15:43:55", "db": "PACKETSTORM", "id": "168359" }, { "date": "2023-01-25T16:09:12", "db": "PACKETSTORM", "id": "170727" }, { "date": "2024-05-09T15:46:44", "db": "PACKETSTORM", "id": "178512" }, { "date": "2022-07-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-684" }, { "date": "2022-07-14T15:15:08.237000", "db": "NVD", "id": "CVE-2022-32212" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-09-07T08:25:00", "db": "JVNDB", "id": "JVNDB-2022-013369" }, { "date": "2023-02-24T00:00:00", "db": "CNNVD", "id": "CNNVD-202207-684" }, { "date": "2023-02-23T20:15:12.057000", "db": "NVD", "id": "CVE-2022-32212" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-684" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Node.js\u00a0Foundation\u00a0 of \u00a0Node.js\u00a0 in products from other multiple vendors \u00a0OS\u00a0 Command 
injection vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-013369" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "operating system command injection", "sources": [ { "db": "CNNVD", "id": "CNNVD-202207-684" } ], "trust": 0.6 } }
var-202201-0349
Vulnerability from variot
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor. node-fetch also contains an open redirect vulnerability. Information may be obtained, and information may be tampered with. The purpose of this text-only errata is to inform you about the security issues fixed in this release. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
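The node-fetch issue (CVE-2022-0235) stems from forwarding credential-bearing headers, such as Authorization, when following a redirect to a different host. The patched behavior can be sketched as a host comparison before headers are re-sent; the function name, header set, and URLs below are illustrative assumptions, not node-fetch's actual API:

```python
from urllib.parse import urlparse

# Assumed set of credential-bearing headers to strip on cross-host redirects.
SENSITIVE_HEADERS = {"authorization", "cookie"}

def redirect_headers(headers: dict, original_url: str, redirect_url: str) -> dict:
    """Drop credential-bearing headers when a redirect leaves the original host."""
    if urlparse(original_url).hostname == urlparse(redirect_url).hostname:
        return dict(headers)  # same host: safe to keep everything
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
```

A client using this guard would forward `Authorization` to `https://a.example/next` but not to `https://evil.example/capture`, which is the leak the CVE describes.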
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update Advisory ID: RHSA-2022:6156-01 Product: RHODF Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156 Issue date: 2022-08-24 CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235 CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586 CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576 CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675 CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785 CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526 CVE-2022-29810 CVE-2022-29824 CVE-2022-31129 ==================================================================== 1. Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- eventsource: Exposure of Sensitive Information (CVE-2022-1650)
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
- node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
- node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
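Several entries in the list above (the moment and golang regexp items) are algorithmic-complexity denial-of-service flaws, where crafted input makes parsing time explode. A common mitigation is to bound input length before invoking an expensive parser. The sketch below uses an assumed length cap and a deliberately backtracking-prone pattern purely for illustration; it is not the patch any of these projects shipped:

```python
import re

MAX_INPUT_LEN = 256  # assumed cap; a real parser picks a limit that fits its formats

# A deliberately backtracking-prone pattern (nested quantifiers) of the kind
# behind many ReDoS reports; matching time can grow explosively on crafted input.
RISKY_DATE_RE = re.compile(r"^([0-9 ]+)+Z$")

def parse_guarded(text: str) -> bool:
    """Bound input length before handing it to an expensive parser."""
    if len(text) > MAX_INPUT_LEN:
        raise ValueError("input too long to parse safely")
    return RISKY_DATE_RE.match(text) is not None
```

The cap turns a worst-case exponential match into a bounded amount of work, which is why length limits are a standard defense-in-depth measure even after the offending regex is rewritten.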
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"'
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security and networks,etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims .
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing .
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on it's own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashbaords not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshot don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- References:
https://access.redhat.com/security/cve/CVE-2021-23440 https://access.redhat.com/security/cve/CVE-2021-23566 https://access.redhat.com/security/cve/CVE-2021-40528 https://access.redhat.com/security/cve/CVE-2022-0235 https://access.redhat.com/security/cve/CVE-2022-0536 https://access.redhat.com/security/cve/CVE-2022-0670 https://access.redhat.com/security/cve/CVE-2022-1292 https://access.redhat.com/security/cve/CVE-2022-1586 https://access.redhat.com/security/cve/CVE-2022-1650 https://access.redhat.com/security/cve/CVE-2022-1785 https://access.redhat.com/security/cve/CVE-2022-1897 https://access.redhat.com/security/cve/CVE-2022-1927 https://access.redhat.com/security/cve/CVE-2022-2068 https://access.redhat.com/security/cve/CVE-2022-2097 https://access.redhat.com/security/cve/CVE-2022-21698 https://access.redhat.com/security/cve/CVE-2022-22576 https://access.redhat.com/security/cve/CVE-2022-23772 https://access.redhat.com/security/cve/CVE-2022-23773 https://access.redhat.com/security/cve/CVE-2022-23806 https://access.redhat.com/security/cve/CVE-2022-24675 https://access.redhat.com/security/cve/CVE-2022-24771 https://access.redhat.com/security/cve/CVE-2022-24772 https://access.redhat.com/security/cve/CVE-2022-24773 https://access.redhat.com/security/cve/CVE-2022-24785 https://access.redhat.com/security/cve/CVE-2022-24921 https://access.redhat.com/security/cve/CVE-2022-25313 https://access.redhat.com/security/cve/CVE-2022-25314 https://access.redhat.com/security/cve/CVE-2022-27774 https://access.redhat.com/security/cve/CVE-2022-27776 https://access.redhat.com/security/cve/CVE-2022-27782 https://access.redhat.com/security/cve/CVE-2022-28327 https://access.redhat.com/security/cve/CVE-2022-29526 https://access.redhat.com/security/cve/CVE-2022-29810 https://access.redhat.com/security/cve/CVE-2022-29824 https://access.redhat.com/security/cve/CVE-2022-31129 https://access.redhat.com/security/updates/classification/#important 
https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: nodejs (14.21.3). Bugs fixed (https://bugzilla.redhat.com/):
2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names 2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection 2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields 2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2066009 - CVE-2021-44906 minimist: prototype pollution 2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields 2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function 2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address 2142822 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z] 2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process 2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service 2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability 2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check 2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS 2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule 2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable 2175827 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]
- Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
========================================================================== Ubuntu Security Notice USN-6158-1 June 13, 2023
node-fetch vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS (Available with Ubuntu Pro)
Summary:
Node Fetch could be made to expose sensitive information if it opened a specially crafted file.
Software Description: - node-fetch: A light-weight module that brings the Fetch API to Node.js
Details:
It was discovered that Node Fetch incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to obtain sensitive information.
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: node-fetch 1.7.3-2ubuntu0.1
Ubuntu 18.04 LTS (Available with Ubuntu Pro): node-fetch 1.7.3-1ubuntu0.1~esm1
In general, a standard system update will make all the necessary changes. Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console
2040693 - "Replication repository" wizard has no validation for name length
2040695 - [MTC UI] "Add Cluster" wizard stucks when the cluster name length is more than 63 characters
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2048537 - Exposed route host to image registry - connecting successfully to invalid registry "xyz.com"
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings
2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace
2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field.
2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade
2061335 - [MTC UI] "Update cluster" button is not getting disabled
2062266 - MTC UI does not display logs properly [OADP-BL]
2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend
2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x
2076593 - Velero pod log missing from UI drop down
2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]
2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan
2079252 - [MTC] Rsync options logs not visible in log-reader pod
2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI]
2082225 - non-numeric user when launching stage pods [OADP-BL]
2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments
2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods
2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels
2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]
2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts
2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]
2096939 - Fix legacy operator.yml inconsistencies and errors
2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.5.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)
- containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)
- minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)
- openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)
- imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- nconf: Prototype pollution in memory store (CVE-2022-21803)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
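Two of the fixes above (nodejs-json-schema and nconf) are prototype-pollution flaws, where attacker-supplied keys such as `__proto__` leak into a recursive configuration merge. The usual remediation is to refuse those keys during the merge. Below is a minimal, language-neutral sketch in Python; the blocked-key list mirrors the JavaScript attack surface and is an assumption for illustration, not either package's actual patch:

```python
# Keys abused in JavaScript prototype-pollution attacks (assumed block list).
BLOCKED_KEYS = {"__proto__", "constructor", "prototype"}

def safe_deep_merge(dst: dict, src: dict) -> dict:
    """Recursively merge src into dst, skipping keys used for pollution attacks."""
    for key, value in src.items():
        if key in BLOCKED_KEYS:
            continue  # drop attacker-controlled special keys instead of merging them
        if isinstance(value, dict) and isinstance(dst.get(key), dict):
            safe_deep_merge(dst[key], value)
        else:
            dst[key] = value
    return dst
```

With this guard, a payload like `{"__proto__": {"admin": true}}` merged into a config simply loses its poisoned key, while ordinary nested settings merge as expected.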
Bug fixes:
- RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target (BZ# 2014557)
- RHACM 2.5.0 images (BZ# 2024938)
- [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?) (BZ#2028348)
- Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly (BZ# 2028647)
- create cluster pool -> choose infra type, As a result infra providers disappear from UI. (BZ# 2033339)
- Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)
- Observability - OCP 311 node role are not displayed completely (BZ# 2038650)
- Documented uninstall procedure leaves many leftovers (BZ# 2041921)
- infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)
- Acm failed to install due to some missing CRDs in operator (BZ# 2047463)
- Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)
- ACM home page now includes /home/ in url (BZ# 2051299)
- proxy heading in Add Credential should be capitalized (BZ# 2051349)
- ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0 (BZ# 2051983)
- Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)
- No cluster information was displayed after a policyset was created (BZ# 2053366)
- Dynamic plugin update does not take effect in Firefox (BZ# 2053516)
- Replicated policy should not be available when creating a Policy Set (BZ# 2054431)
- Placement section in Policy Set wizard does not reset when users click "Back" to re-configured placement (BZ# 2054433)
- Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation, which will be updated shortly for this release, for important instructions on installing this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing
- Bugs fixed (https://bugzilla.redhat.com/):
2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2028224 - RHACM 2.5.0 images
2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)
2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2033339 - create cluster pool -> choose infra type , As a result infra providers disappear from UI.
2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub
2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO
2073355 - Get blank page when click policy with unknown status in Governance -> Overview page
2073508 - Thread responsible to get insights data from ks clusters is broken
2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters
2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology
2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin
2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of "resource conflict" error
2074178 - Editing Helm Argo Applications does not Prune Old Resources
2074626 - Policy placement failure during ZTP SNO scale test
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal
2074937 - UI allows creating cluster even when there are no ClusterImageSets
2075416 - infraEnv failed to create image after restore
2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod
2075739 - The lookup function won't check the referred resource whether exist when using template policies
2076421 - Can't select existing placement for policy or policyset when editing policy or policyset
2076494 - No policyreport CR for spoke clusters generated in the disconnected env
2076502 - The policyset card doesn't show the cluster status(violation/without violation) again after deleted one policy
2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator
2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster
2077291 - Prometheus doesn't display acm_managed_cluster_info after upgrade from 2.4 to 2.5
2077304 - Create Cluster button is disabled only if other clusters exist
2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5
2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI
2077751 - Can't create a template policy from UI when the object's name is referring Golang text template syntax in this policy
2077783 - Still show violation for clusterserviceversions after enforced "Detect Image vulnerabilities " policy template and the operator is installed
2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set
2078164 - Failed to edit a policy without placement
2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement
2078373 - Disable the hyperlink of ks node in standalone MCE environment since the search component was not exists
2078617 - Azure public credential details get pre-populated with base domain name in UI
2078952 - View pod logs in search details returns error
2078973 - Crashed pod is marked with success in Topology
2079013 - Changing existing placement rules does not change YAML file
2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM
2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on
2079494 - Hitting Enter in yaml editor caused unexpected keys "key00x:" to be created
2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5
2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page
2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted
2079615 - Edit appset placement in UI with a new placement throws error upon submitting
2079658 - Cluster Count is Incorrect in Application UI
2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower
2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters
2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080503 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes
2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page
2080712 - Select an existing placement configuration does not work
2080776 - Unrecognized characters are displayed on policy and policy set yaml editors
2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster
2081810 - Type '-' character in Name field caused previously typed character backspaced in in the name field of policy wizard
2081829 - Application deployed on local cluster's topology is crashing after upgrade
2081938 - The deleted policy still be shown on the policyset review page when edit this policy set
2082226 - Object Storage Topology includes residue of resources after Upgrade
2082409 - Policy set details panel remains even after the policy set has been deleted
2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets
2083038 - Warning still refers to the klusterlet-addon-appmgr pod rather than the application-manager pod
2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated
2083434 - The provider-credential-controller did not support the RHV credentials type
2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace
2083870 - When editing an existing application and refreshing the Select an existing placement configuration, multiple occurrences of the placementrule gets displayed
2084034 - The status message looks messy in the policy set card, suggest one kind status one a row
2084158 - Support provisioning bm cluster where no provisioning network provided
2084622 - Local Helm application shows cluster resources as Not Deployed in Topology [Upgrade]
2085083 - Policies fail to copy to cluster namespace after ACM upgrade
2085237 - Resources referenced by a channel are not annotated with backup label
2085273 - Error querying for ansible job in app topology
2085281 - Template name error is reported but the template name was found in a different replicated policy
2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page
2087515 - Validation thrown out in configuration for disconnect install while creating bm credential
2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]
2088511 - Some cluster resources are not showing labels that are defined in the YAML
- It increases application response times and allows for dramatically improving performance while providing availability, reliability, and elastic scale. Find out more about Data Grid 8.4.0 in the Release Notes[3].
Security Fix(es):

- prismjs: improperly escaped output allows a XSS (CVE-2022-23647)
- snakeyaml: Denial of Service due to missing nested depth limitation for collections (CVE-2022-25857)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- netty: world readable temporary file containing sensitive data (CVE-2022-24823)
- snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode (CVE-2022-38749)
- snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject (CVE-2022-38750)
- snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match (CVE-2022-38751)
- snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode (CVE-2022-38752)
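The node-fetch issue above (CVE-2022-0235) boils down to credential-bearing headers such as Cookie and Authorization being forwarded when a redirect points at a different host; node-fetch 2.6.7 and 3.1.1 stop doing that. The sketch below illustrates the class of check involved. The function name `sanitizeRedirectHeaders` is illustrative, not node-fetch API, and the strict same-origin comparison is a simplification (the actual fix permits same-domain/subdomain targets).

```javascript
// Illustrative sketch only: drop sensitive headers when a redirect
// crosses origins, as an attacker-controlled redirect target must not
// receive the caller's credentials. Not the node-fetch implementation.
const SENSITIVE = new Set(['authorization', 'cookie', 'www-authenticate']);

function sanitizeRedirectHeaders(fromUrl, toUrl, headers) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  // Simplified same-origin check (scheme + host); the real fix in
  // node-fetch also allows subdomains of the original host.
  const sameOrigin = from.protocol === to.protocol && from.host === to.host;
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!sameOrigin && SENSITIVE.has(name.toLowerCase())) continue;
    out[name] = value;
  }
  return out;
}
```

For example, redirecting from `https://a.example/login` to `https://evil.test/` would strip `Authorization` while leaving non-sensitive headers such as `Accept` intact.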
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:
To install this update, do the following:

- Download the Data Grid 8.4.0 Server patch from the customer portal[2].
- Back up your existing Data Grid installation. You should back up databases, configuration files, and so on.
- Install the Data Grid 8.4.0 Server patch.
- Restart Data Grid to ensure the changes take effect.

For more information about Data Grid 8.4.0, refer to the 8.4.0 Release Notes[3].
- Bugs fixed (https://bugzilla.redhat.com/):
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2056643 - CVE-2022-23647 prismjs: improperly escaped output allows a XSS
2087186 - CVE-2022-24823 netty: world readable temporary file containing sensitive data
2126789 - CVE-2022-25857 snakeyaml: Denial of Service due to missing nested depth limitation for collections
2129706 - CVE-2022-38749 snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode
2129707 - CVE-2022-38750 snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject
2129709 - CVE-2022-38751 snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match
2129710 - CVE-2022-38752 snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0349", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node-fetch", "scope": "lt", "trust": 1.0, "vendor": "node fetch", "version": "3.1.1" }, { "model": "node-fetch", "scope": "gte", "trust": 1.0, "vendor": "node fetch", "version": "3.0.0" }, { "model": "node-fetch", "scope": "lt", "trust": 1.0, "vendor": "node 
fetch", "version": "2.6.7" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "node-fetch", "scope": null, "trust": 0.8, "vendor": "node fetch \u30d7\u30ed\u30b8\u30a7\u30af\u30c8", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:node-fetch_project:node-fetch:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "2.6.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:node-fetch_project:node-fetch:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "3.1.1", "versionStartIncluding": "3.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": 
[ { "db": "NVD", "id": "CVE-2022-0235" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" } ], "trust": 0.8 }, "cve": "CVE-2022-0235", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.8, 
"confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2022-0235", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.1, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 2.8, "impactScore": 2.7, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "security@huntr.dev", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.1, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2022-0235", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Changed", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-0235", "trust": 1.8, "value": "MEDIUM" }, { "author": "security@huntr.dev", "id": "CVE-2022-0235", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2022-0235", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": 
"VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor. node-fetch Exists in an open redirect vulnerability.Information may be obtained and information may be tampered with. The purpose of this text-only\nerrata is to inform you about the security issues fixed in this release. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID: RHSA-2022:6156-01\nProduct: RHODF\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6156\nIssue date: 2022-08-24\nCVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. \n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification 
leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC 
mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\": \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks,etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashbaords not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy\nXo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N\nb3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY\nsJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj\ngz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am\n2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf\nmX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl\nCSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE\nhTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY\n3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E\n2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V\nERCapzwcRuQ=gV+z\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. 
\n\nThe following packages have been upgraded to a later upstream version:\nnodejs (14.21.3). Bugs fixed (https://bugzilla.redhat.com/):\n\n2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names\n2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection\n2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields\n2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address\n2142822 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability\n2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check\n2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS\n2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule\n2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable\n2175827 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]\n\n6. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
==========================================================================\nUbuntu Security Notice USN-6158-1\nJune 13, 2023\n\nnode-fetch vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nNode Fetch could be made to expose sensitive information if it opened a\nspecially crafted file. \n\nSoftware Description:\n- node-fetch: A light-weight module that brings the Fetch API to Node.js\n\nDetails:\n\nIt was discovered that Node Fetch incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to obtain\nsensitive information. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n node-fetch 1.7.3-2ubuntu0.1\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n node-fetch 1.7.3-1ubuntu0.1~esm1\n\nIn general, a standard system update will make all the necessary changes. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] \u0027Update Repository\u0027 option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - \u0027Replication repository\u0027 wizard has no validation for name length\n2040695 - [MTC UI] \u0027Add Cluster\u0027 
wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry \u0027xyz.com\u0027\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on \u0027Migrations\u0027 page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the \u0027Last State\u0027 field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] \u0027Update cluster\u0027 button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster\u0027s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing 
velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes: \n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic 
IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nBug fixes:\n\n* RFE Copy secret with specific secret namespace, name for source and name,\nnamespace and cluster label for target (BZ# 2014557)\n\n* RHACM 2.5.0 images (BZ# 2024938)\n\n* [UI] When you delete host agent from infraenv no confirmation message\nappear (Are you sure you want to delete x?) (BZ#2028348)\n\n* Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller\nnot working properly (BZ# 2028647)\n\n* create cluster pool -\u003e choose infra type, As a result infra providers\ndisappear from UI. 
(BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation, which will be updated shortly for this release, for\nimportant\ninstructions on installing this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. \n2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub\n2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO\n2073355 - Get blank page when click policy with unknown status in Governance -\u003e Overview page\n2073508 - Thread responsible to get insights data from *ks clusters is broken\n2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters\n2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology\n2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin\n2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of \"resource conflict\" error\n2074178 - Editing Helm Argo Applications does not Prune Old Resources\n2074626 - Policy placement failure during ZTP SNO scale test\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal\n2074937 - UI allows creating cluster even when there are no 
ClusterImageSets\n2075416 - infraEnv failed to create image after restore\n2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod\n2075739 - The lookup function won\u0027t check the referred resource whether exist when using template policies\n2076421 - Can\u0027t select existing placement for policy or policyset when editing policy or policyset\n2076494 - No policyreport CR for spoke clusters generated in the disconnected env\n2076502 - The policyset card doesn\u0027t show the cluster status(violation/without violation) again after deleted one policy\n2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator\n2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster\n2077291 - Prometheus doesn\u0027t display acm_managed_cluster_info after upgrade from 2.4 to 2.5\n2077304 - Create Cluster button is disabled only if other clusters exist\n2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5\n2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI\n2077751 - Can\u0027t create a template policy from UI when the object\u0027s name is referring Golang text template syntax in this policy\n2077783 - Still show violation for clusterserviceversions after enforced \"Detect Image vulnerabilities \" policy template and the operator is installed\n2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set\n2078164 - Failed to edit a policy without placement\n2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement\n2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists\n2078617 - Azure public credential details get pre-populated with base domain name in UI\n2078952 - View pod logs 
in search details returns error\n2078973 - Crashed pod is marked with success in Topology\n2079013 - Changing existing placement rules does not change YAML file\n2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on\n2079494 - Hitting Enter in yaml editor caused unexpected keys \"key00x:\" to be created\n2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5\n2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page\n2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted\n2079615 - Edit appset placement in UI with a new placement throws error upon submitting\n2079658 - Cluster Count is Incorrect in Application UI\n2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters\n2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080503 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page\n2080712 - Select an existing placement configuration does not work\n2080776 - Unrecognized characters are displayed on policy and policy set yaml editors\n2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster\n2081810 - Type \u0027-\u0027 character in Name field caused previously typed character 
backspaced in in the name field of policy wizard\n2081829 - Application deployed on local cluster\u0027s topology is crashing after upgrade\n2081938 - The deleted policy still be shown on the policyset review page when edit this policy set\n2082226 - Object Storage Topology includes residue of resources after Upgrade\n2082409 - Policy set details panel remains even after the policy set has been deleted\n2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets\n2083038 - Warning still refers to the `klusterlet-addon-appmgr` pod rather than the `application-manager` pod\n2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated\n2083434 - The provider-credential-controller did not support the RHV credentials type\n2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace\n2083870 - When editing an existing application and refreshing the `Select an existing placement configuration`, multiple occurrences of the placementrule gets displayed\n2084034 - The status message looks messy in the policy set card, suggest one kind status one a row\n2084158 - Support provisioning bm cluster where no provisioning network provided\n2084622 - Local Helm application shows cluster resources as `Not Deployed` in Topology [Upgrade]\n2085083 - Policies fail to copy to cluster namespace after ACM upgrade\n2085237 - Resources referenced by a channel are not annotated with backup label\n2085273 - Error querying for ansible job in app topology\n2085281 - Template name error is reported but the template name was found in a different replicated policy\n2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page\n2087515 - Validation thrown out in configuration for disconnect install while creating bm credential\n2088158 - Object 
Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]\n2088511 - Some cluster resources are not showing labels that are defined in the YAML\n\n5. \nIt increases application response times and allows for dramatically\nimproving performance while providing availability, reliability, and\nelastic scale. Find out more about Data Grid 8.4.0 in the Release Notes[3]. \n\nSecurity Fix(es):\n\n* prismjs: improperly escaped output allows a XSS (CVE-2022-23647)\n\n* snakeyaml: Denial of Service due to missing nested depth limitation for\ncollections (CVE-2022-25857)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* netty: world readable temporary file containing sensitive data\n(CVE-2022-24823)\n\n* snakeyaml: Uncaught exception in\norg.yaml.snakeyaml.composer.Composer.composeSequenceNode (CVE-2022-38749)\n\n* snakeyaml: Uncaught exception in\norg.yaml.snakeyaml.constructor.BaseConstructor.constructObject\n(CVE-2022-38750)\n\n* snakeyaml: Uncaught exception in\njava.base/java.util.regex.Pattern$Ques.match (CVE-2022-38751)\n\n* snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode\n(CVE-2022-38752)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nTo install this update, do the following:\n \n1. Download the Data Grid 8.4.0 Server patch from the customer portal[\u00b2]. Back up your existing Data Grid installation. You should back up\ndatabases, configuration files, and so on. Install the Data Grid 8.4.0 Server patch. Restart Data Grid to ensure the changes take effect. \n\nFor more information about Data Grid 8.4.0, refer to the 8.4.0 Release\nNotes[\u00b3]\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2056643 - CVE-2022-23647 prismjs: improperly escaped output allows a XSS\n2087186 - CVE-2022-24823 netty: world readable temporary file containing sensitive data\n2126789 - CVE-2022-25857 snakeyaml: Denial of Service due to missing nested depth limitation for collections\n2129706 - CVE-2022-38749 snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode\n2129707 - CVE-2022-38750 snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject\n2129709 - CVE-2022-38751 snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match\n2129710 - CVE-2022-38752 snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" } ], "trust": 2.52 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-0235", "trust": 3.6 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "JVNDB", "id": "JVNDB-2022-003319", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2022-0235", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166812", "trust": 0.1 }, { "db": "PACKETSTORM", 
"id": "168657", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168150", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167622", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171839", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172897", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167679", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167459", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169935", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "id": "VAR-202201-0349", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:17:54.278000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "SSA-637483", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html" }, { "title": "Red Hat: Moderate: nodejs:14 security, bug fix, and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230050 - security advisory" }, { "title": "Red Hat: CVE-2022-0235", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-0235" 
}, { "title": "Red Hat: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230612 - security advisory" }, { "title": "Red Hat: Important: Red Hat Data Grid 8.4.0 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228524 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.1.2.1 containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221739 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221715 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221681 - security advisory" }, { "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.4.2 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220735 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221083 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security 
updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221476 - security advisory" }, { "title": "IBM: Security Bulletin: IBM QRadar Assistant app for IBM QRadar SIEM includes components with multiple known vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0c5e20c044e4005143b2303b28407553" }, { "title": "IBM: Security Bulletin: Multiple security vulnerabilities are addressed with IBM Business Automation Manager Open Editions 8.0.1", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ac267c598ae2a2882a98ed5463cc028d" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225483 - security advisory" }, { "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224956 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225392 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "npcheck", "trust": 0.1, "url": "https://github.com/nodeshift/npcheck " }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2022-0235 " } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" } ] }, "problemtype_data": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-200", "trust": 1.0 }, { "problemtype": "Open redirect (CWE-601) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 1.1, "url": "https://huntr.dev/bounties/d26ab655-38d6-48b3-be15-f9ad6b6ae6f7" }, { "trust": 1.1, "url": "https://github.com/node-fetch/node-fetch/commit/36e47e8a6406185921e4985dcbeff140d73eaa10" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html" }, { "trust": 0.8, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4157" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3744" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13974" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45485" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3773" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4002" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43976" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-0941" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43389" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-44733" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4037" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-37159" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3772" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-0404" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3669" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3764" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43056" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-41864" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4197" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-26401" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3743" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1011" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45486" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-4788" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0286" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3759" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0002" 
}, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4203" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-42739" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24778" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23647" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23647" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25857" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29526" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/200.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0050" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-assistant-app-for-ibm-qradar-siem-includes-components-with-multiple-known-vulnerabilities/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0413" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25236" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0392" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23308" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-45960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-46143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0361" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0261" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22826" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22825" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0359" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22822" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1476" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0318" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25315" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25235" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6835" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25647" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37136" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41269" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26520" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25647" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22569" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37734" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0981" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41269" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0981" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22569" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1708" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5392" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-23918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-35065" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-43548" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24999" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38900" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4904" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-23920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-35256" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1742" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25881" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44531" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4904" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38900" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-6158-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/node-fetch/1.7.3-2ubuntu0.1" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5483" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43858" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764" }, 
{ "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43816" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38749" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/softwaredetail.html?softwareid=70381\u0026product=data.grid\u0026version=8.4\u0026downloadtype=patches" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38749" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38752" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.4/html-single/red_hat_data_grid_8.4_release_notes/index" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-38752" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38751" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-01-16T00:00:00", "db": "VULMON", "id": "CVE-2022-0235" }, { "date": "2023-02-14T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "date": "2022-04-21T15:12:25", "db": "PACKETSTORM", "id": "166812" }, { "date": "2022-10-07T15:02:16", "db": "PACKETSTORM", "id": "168657" }, { "date": 
"2022-08-25T15:22:18", "db": "PACKETSTORM", "id": "168150" }, { "date": "2022-06-29T20:27:02", "db": "PACKETSTORM", "id": "167622" }, { "date": "2023-04-12T16:57:08", "db": "PACKETSTORM", "id": "171839" }, { "date": "2023-06-13T21:27:37", "db": "PACKETSTORM", "id": "172897" }, { "date": "2022-07-01T15:04:32", "db": "PACKETSTORM", "id": "167679" }, { "date": "2022-06-09T16:11:52", "db": "PACKETSTORM", "id": "167459" }, { "date": "2022-11-18T14:27:39", "db": "PACKETSTORM", "id": "169935" }, { "date": "2022-01-16T17:15:07.870000", "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-03T00:00:00", "db": "VULMON", "id": "CVE-2022-0235" }, { "date": "2023-02-14T04:12:00", "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "date": "2023-02-03T19:16:07.090000", "db": "NVD", "id": "CVE-2022-0235" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "172897" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "node-fetch\u00a0 Open redirect vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution, xss", "sources": [ { "db": "PACKETSTORM", "id": "168657" } ], "trust": 0.1 } }
var-202201-1080
Vulnerability from variot
There is a carry propagation bug in the MIPS32 and MIPS64 squaring procedure. Many EC algorithms are affected, including some of the TLS 1.3 default curves. Impact was not analyzed in detail, because the pre-requisites for attack are considered unlikely and include reusing private keys. Analysis suggests that attacks against RSA and DSA as a result of this defect would be very difficult to perform and are not believed likely. Attacks against DH are considered just feasible (although very difficult) because most of the work necessary to deduce information about a private key may be performed offline. The amount of resources required for such an attack would be significant. However, for an attack on TLS to be meaningful, the server would have to share the DH private key among multiple clients, which is no longer an option since CVE-2016-0701. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0.0. It was addressed in the releases of 1.1.1m and 3.0.1 on the 15th of December 2021. For the 1.0.2 release it is addressed in git commit 6fc1aaaf3 that is available to premium support customers only. It will be made available in 1.0.2zc when it is released. The issue only affects OpenSSL on MIPS platforms. Fixed in OpenSSL 3.0.1 (Affected 3.0.0). Fixed in OpenSSL 1.1.1m (Affected 1.1.1-1.1.1l). Fixed in OpenSSL 1.0.2zc-dev (Affected 1.0.2-1.0.2zb).

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Gentoo Linux Security Advisory                           GLSA 202210-02
https://security.gentoo.org/
Severity: Normal
Title: OpenSSL: Multiple Vulnerabilities
Date: October 16, 2022
Bugs: #741570, #809980, #832339, #835343, #842489, #856592
ID: 202210-02
Synopsis
Multiple vulnerabilities have been discovered in OpenSSL, the worst of which could result in denial of service.
Background
OpenSSL is an Open Source toolkit implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general purpose cryptography library.
Affected packages
-------------------------------------------------------------------
Package / Vulnerable / Unaffected
-------------------------------------------------------------------
1 dev-libs/openssl < 1.1.1q >= 1.1.1q
Description
Multiple vulnerabilities have been discovered in OpenSSL. Please review the CVE identifiers referenced below for details.
Impact
Please review the referenced CVE identifiers for details.
Workaround
There is no known workaround at this time.
Resolution
All OpenSSL users should upgrade to the latest version:
# emerge --sync
# emerge --ask --oneshot --verbose ">=dev-libs/openssl-1.1.1q"
References
[ 1 ] CVE-2020-1968
      https://nvd.nist.gov/vuln/detail/CVE-2020-1968
[ 2 ] CVE-2021-3711
      https://nvd.nist.gov/vuln/detail/CVE-2021-3711
[ 3 ] CVE-2021-3712
      https://nvd.nist.gov/vuln/detail/CVE-2021-3712
[ 4 ] CVE-2021-4160
      https://nvd.nist.gov/vuln/detail/CVE-2021-4160
[ 5 ] CVE-2022-0778
      https://nvd.nist.gov/vuln/detail/CVE-2022-0778
[ 6 ] CVE-2022-1292
      https://nvd.nist.gov/vuln/detail/CVE-2022-1292
[ 7 ] CVE-2022-1473
      https://nvd.nist.gov/vuln/detail/CVE-2022-1473
[ 8 ] CVE-2022-2097
      https://nvd.nist.gov/vuln/detail/CVE-2022-2097
Availability
This GLSA and any updates to it are available for viewing at the Gentoo Security Website:
https://security.gentoo.org/glsa/202210-02
Concerns?
Security is a primary focus of Gentoo Linux and ensuring the confidentiality and security of our users' machines is of utmost importance to us. Any security concerns should be addressed to security@gentoo.org or alternatively, you may file a bug at https://bugs.gentoo.org.
License
Copyright 2022 Gentoo Foundation, Inc; referenced text belongs to its owner(s).
The contents of this document are licensed under the Creative Commons - Attribution / Share Alike license.
https://creativecommons.org/licenses/by-sa/2.5 . -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA512
Debian Security Advisory DSA-5103-1                   security@debian.org
https://www.debian.org/security/                     Salvatore Bonaccorso
March 15, 2022                        https://www.debian.org/security/faq
Package        : openssl
CVE ID         : CVE-2021-4160 CVE-2022-0778
Debian Bug     : 989604
Tavis Ormandy discovered that the BN_mod_sqrt() function of OpenSSL could be tricked into an infinite loop. This could result in denial of service via malformed certificates.
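The failure mode described here — a modular square root routine that never terminates when its modulus is not prime — can be illustrated with a small Tonelli–Shanks sketch in Python. This is not OpenSSL's BN_mod_sqrt() code or its actual fix; it only shows how bounding each search loop turns a malicious (composite) modulus into an error instead of a hang.

```python
def mod_sqrt(a, p, max_iter=1000):
    """Tonelli-Shanks modular square root with termination guards.

    For prime p the algorithm always converges; for a composite p
    (as in a malformed certificate) the inner searches may never
    succeed, so each loop is bounded and raises instead of spinning.
    """
    a %= p
    if a == 0:
        return 0
    if p == 2:
        return a
    # Euler's criterion: fail fast if a is not a quadratic residue mod p.
    if pow(a, (p - 1) // 2, p) != 1:
        raise ValueError("not a quadratic residue (or modulus not prime)")
    # Write p - 1 = q * 2^s with q odd.
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # Find a quadratic non-residue z; bounded so a composite p cannot hang us.
    z = 2
    while z < p and pow(z, (p - 1) // 2, p) != p - 1:
        z += 1
    if z == p:
        raise ValueError("no non-residue found; modulus may not be prime")
    m, c = s, pow(z, q, p)
    t, r = pow(a, q, p), pow(a, (q + 1) // 2, p)
    for _ in range(max_iter):
        if t == 1:
            return r
        # Least i with t^(2^i) == 1 (mod p).
        i, t2 = 0, t
        while t2 != 1 and i < m:
            t2 = t2 * t2 % p
            i += 1
        if i == m:
            break  # cannot happen for prime p; bail out for composites
        b = pow(c, 1 << (m - i - 1), p)
        b2 = b * b % p
        m, c, t, r = i, b2, t * b2 % p, r * b % p
    raise ValueError("no convergence; modulus may not be prime")
```

For prime moduli this returns a valid root (e.g. a root of 2 mod 7); for composite moduli such as 9 or 15 it raises rather than looping forever.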
For the oldstable distribution (buster), this problem has been fixed in version 1.1.1d-0+deb10u8.
For the stable distribution (bullseye), this problem has been fixed in version 1.1.1k-1+deb11u2.
For the detailed security status of openssl please refer to its security tracker page at: https://security-tracker.debian.org/tracker/openssl
Further information about Debian Security Advisories, how to apply these updates to your system and frequently asked questions can be found at: https://www.debian.org/security/
Mailing list: debian-security-announce@lists.debian.org -----BEGIN PGP SIGNATURE-----
iQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmIwxQtfFIAAAAAALgAo aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2 NDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND z0R2qw//c0GbzcbXlLfibf7Nki5CMJUdWqx1si8O2uQ1vKxgC07rCAx1Lrw0TtIl Tq1vYRtSbvy8P4Qn3E6/lbSYTnM7JbkriZ1HS3Mw4VFlOBA8lWMif4KotrcMAoYE IOQlhhTCkKZM8cL4YKDwN7XSy5LSdt/sw5rIi1ZpgVTEXQeKIDPa5WK6YyIGNG6k h83TPYZp+8e3Fuoubb8RY5CUfFomdMHRazHcrCkjY+yvFTFdKbUza9RjUs44xu2Z ZUTfIddR8D8mWfKOyvAVMw0A7/zjFW1IX0vC0RhHwjrulLgJbqWvcYQgEJy/wOKd tWjVwGya7+Fxn6GFL0rHZP/OFq9mDwxyBDfDg/hD+TSnbxtyHIxUH4QoWdPPgJxP ahln2TNfsnQsCopdn9dJ/XOrkC35R7Jp11kmX8MCTP6k8ob4mdQIACcRND/jcPgT tOBoUBCrha98Qvdh6UAGegTxqOBaNhG52fpNjEegq/q7kxlugdOtbY1nZXvuHHI5 C9Gd6e4JqpRlMDuT7rC8qchXJM8VnhWdVdz95gkeQCA21+AGJ+CEvTpSRPY6qCrM rUvS3HVrBFNLWNlsA68or3y8CfxjFbpXnSxflCmoBtmAp6z9TXm59Fu7N6Qqkpom yV0hQAqqeFa9u3NZKoNrj/FGWYXZ+zMt+jifRLokuB0IhFUOJ70= =SB84 -----END PGP SIGNATURE----- . If that applies then:
OpenSSL 1.0.2 users should apply git commit 6fc1aaaf3 (premium support customers only) OpenSSL 1.1.1 users should upgrade to 1.1.1m OpenSSL 3.0.0 users should upgrade to 3.0.1
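The affected/fixed ranges above reduce to a version comparison, provided OpenSSL's letter-suffix ordering (…, y, z, za, zb, zc) is handled. A minimal sketch, assuming the fix versions named in this advisory (1.0.2zc is premium-support only):

```python
import re

# Fixed releases named in the advisory for CVE-2021-4160, keyed by branch.
FIXED_IN = {
    (1, 0, 2): "1.0.2zc",
    (1, 1, 1): "1.1.1m",
    (3, 0, 0): "3.0.1",
}

def parse_version(v):
    """Split e.g. '1.0.2zb' into numeric parts plus a sortable letter suffix."""
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)([a-z]*)", v)
    if not m:
        raise ValueError(f"unrecognized OpenSSL version: {v}")
    major, minor, patch, suffix = m.groups()
    # Patch letters sort a..z, then za, zb, zc; (length, text) keeps that order.
    return (int(major), int(minor), int(patch), (len(suffix), suffix))

def is_affected(version):
    """True if `version` falls in an affected range for this issue."""
    parsed = parse_version(version)
    fix = FIXED_IN.get(parsed[:3])
    return fix is not None and parsed < parse_version(fix)
```

Note this only encodes the version ranges; actual exposure additionally requires a MIPS platform, per the advisory.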
This issue was found on the 10th of December 2021 and subsequently fixed by Bernd Edlinger.
Note
OpenSSL 1.0.2 is out of support and no longer receiving public updates.
References
URL for this Security Advisory: https://www.openssl.org/news/secadv/20220128.txt
Note: the online version of the advisory may be updated with additional details over time.
For details of OpenSSL severity classifications please see: https://www.openssl.org/policies/secpolicy.html
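As a quick local check against the upgrade guidance above, Python's standard library exposes which OpenSSL build the interpreter is linked against:

```python
import ssl

# Report the OpenSSL library Python is linked against; compare the result
# with the fixed releases named in the advisory (1.1.1m, 3.0.1).
print(ssl.OPENSSL_VERSION)       # version string of the linked library
print(ssl.OPENSSL_VERSION_INFO)  # numeric tuple with major/minor/patch fields
```

This reflects the library Python was built or loaded with, which may differ from the system's default `openssl` binary.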
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-1080", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "enterprise manager ops center", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "12.4.0.0" }, { "model": "openssl", "scope": "lt", "trust": 1.0, "vendor": "openssl", "version": "1.1.1m" }, { "model": "openssl", "scope": "gte", "trust": 1.0, 
"vendor": "openssl", "version": "1.1.1" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "openssl", "scope": "eq", "trust": 1.0, "vendor": "openssl", "version": "3.0.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "11.0" }, { "model": "health sciences inform publisher", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "6.2.1.1" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "9.0" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.59" }, { "model": "jd edwards enterpriseone tools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "9.2.6.3" }, { "model": "health sciences inform publisher", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "6.3.1.1" }, { "model": "openssl", "scope": "lte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2zb" }, { "model": "openssl", "scope": "gte", "trust": 1.0, "vendor": "openssl", "version": "1.0.2" }, { "model": "jd edwards world security", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "a9.4" }, { "model": "peoplesoft enterprise peopletools", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "8.58" } ], "sources": [ { "db": "NVD", "id": "CVE-2021-4160" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:openssl:openssl:3.0.0:alpha1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha10:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha11:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha12:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha13:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha14:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha15:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha16:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha17:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha3:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha4:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha5:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha6:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha7:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha8:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:alpha9:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:beta1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:3.0.0:beta2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": 
true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndIncluding": "1.0.2zb", "versionStartIncluding": "1.0.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:openssl:openssl:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.1.1m", "versionStartIncluding": "1.1.1", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:9.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:11.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_world_security:a9.4:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.58:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:peoplesoft_enterprise_peopletools:8.59:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:jd_edwards_enterpriseone_tools:9.2.6.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_inform_publisher:6.3.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:health_sciences_inform_publisher:6.2.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:a:oracle:enterprise_manager_ops_center:12.4.0.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-4160" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens reported these vulnerabilities to CISA.", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-2650" } ], "trust": 0.6 }, "cve": "CVE-2021-4160", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "VULMON", "availabilityImpact": "NONE", "baseScore": 4.3, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "id": "CVE-2021-4160", 
"impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "MEDIUM", "trust": 0.1, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "HIGH", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.9, "baseSeverity": "MEDIUM", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.2, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.1" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-4160", "trust": 1.0, "value": "MEDIUM" }, { "author": "CNNVD", "id": "CNNVD-202201-2650", "trust": 0.6, "value": "MEDIUM" }, { "author": "VULMON", "id": "CVE-2021-4160", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "NVD", "id": "CVE-2021-4160" }, { "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "There is a carry propagation bug in the MIPS32 and MIPS64 squaring procedure. Many EC algorithms are affected, including some of the TLS 1.3 default curves. Impact was not analyzed in detail, because the pre-requisites for attack are considered unlikely and include reusing private keys. Analysis suggests that attacks against RSA and DSA as a result of this defect would be very difficult to perform and are not believed likely. Attacks against DH are considered just feasible (although very difficult) because most of the work necessary to deduce information about a private key may be performed offline. 
The amount of resources required for such an attack would be significant. However, for an attack on TLS to be meaningful, the server would have to share the DH private key among multiple clients, which is no longer an option since CVE-2016-0701. This issue affects OpenSSL versions 1.0.2, 1.1.1 and 3.0.0. It was addressed in the releases of 1.1.1m and 3.0.1 on the 15th of December 2021. For the 1.0.2 release it is addressed in git commit 6fc1aaaf3 that is available to premium support customers only. It will be made available in 1.0.2zc when it is released. The issue only affects OpenSSL on MIPS platforms. Fixed in OpenSSL 3.0.1 (Affected 3.0.0). Fixed in OpenSSL 1.1.1m (Affected 1.1.1-1.1.1l). Fixed in OpenSSL 1.0.2zc-dev (Affected 1.0.2-1.0.2zb). - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\nGentoo Linux Security Advisory GLSA 202210-02\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n https://security.gentoo.org/\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\n Severity: Normal\n Title: OpenSSL: Multiple Vulnerabilities\n Date: October 16, 2022\n Bugs: #741570, #809980, #832339, #835343, #842489, #856592\n ID: 202210-02\n\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\n\nSynopsis\n========\n\nMultiple vulnerabilities have been discovered in OpenSSL, the worst of\nwhich could result in denial of service. \n\nBackground\n==========\n\nOpenSSL is an Open Source toolkit implementing the Secure Sockets Layer\n(SSL v2/v3) and Transport Layer Security (TLS v1) as well as a general\npurpose cryptography library. \n\nAffected packages\n=================\n\n -------------------------------------------------------------------\n Package / Vulnerable / Unaffected\n -------------------------------------------------------------------\n 1 dev-libs/openssl \u003c 1.1.1q \u003e= 1.1.1q\n\nDescription\n===========\n\nMultiple vulnerabilities have been discovered in OpenSSL. 
Please review\nthe CVE identifiers referenced below for details. \n\nImpact\n======\n\nPlease review the referenced CVE identifiers for details. \n\nWorkaround\n==========\n\nThere is no known workaround at this time. \n\nResolution\n==========\n\nAll OpenSSL users should upgrade to the latest version:\n\n # emerge --sync\n # emerge --ask --oneshot --verbose \"\u003e=dev-libs/openssl-1.1.1q\"\n\nReferences\n==========\n\n[ 1 ] CVE-2020-1968\n https://nvd.nist.gov/vuln/detail/CVE-2020-1968\n[ 2 ] CVE-2021-3711\n https://nvd.nist.gov/vuln/detail/CVE-2021-3711\n[ 3 ] CVE-2021-3712\n https://nvd.nist.gov/vuln/detail/CVE-2021-3712\n[ 4 ] CVE-2021-4160\n https://nvd.nist.gov/vuln/detail/CVE-2021-4160\n[ 5 ] CVE-2022-0778\n https://nvd.nist.gov/vuln/detail/CVE-2022-0778\n[ 6 ] CVE-2022-1292\n https://nvd.nist.gov/vuln/detail/CVE-2022-1292\n[ 7 ] CVE-2022-1473\n https://nvd.nist.gov/vuln/detail/CVE-2022-1473\n[ 8 ] CVE-2022-2097\n https://nvd.nist.gov/vuln/detail/CVE-2022-2097\n\nAvailability\n============\n\nThis GLSA and any updates to it are available for viewing at\nthe Gentoo Security Website:\n\n https://security.gentoo.org/glsa/202210-02\n\nConcerns?\n=========\n\nSecurity is a primary focus of Gentoo Linux and ensuring the\nconfidentiality and security of our users\u0027 machines is of utmost\nimportance to us. Any security concerns should be addressed to\nsecurity@gentoo.org or alternatively, you may file a bug at\nhttps://bugs.gentoo.org. \n\nLicense\n=======\n\nCopyright 2022 Gentoo Foundation, Inc; referenced text\nbelongs to its owner(s). \n\nThe contents of this document are licensed under the\nCreative Commons - Attribution / Share Alike license. \n\nhttps://creativecommons.org/licenses/by-sa/2.5\n. 
-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA512\n\n- -------------------------------------------------------------------------\nDebian Security Advisory DSA-5103-1 security@debian.org\nhttps://www.debian.org/security/ Salvatore Bonaccorso\nMarch 15, 2022 https://www.debian.org/security/faq\n- -------------------------------------------------------------------------\n\nPackage : openssl\nCVE ID : CVE-2021-4160 CVE-2022-0778\nDebian Bug : 989604\n\nTavis Ormandy discovered that the BN_mod_sqrt() function of OpenSSL\ncould be tricked into an infinite loop. This could result in denial of\nservice via malformed certificates. \n\nFor the oldstable distribution (buster), this problem has been fixed\nin version 1.1.1d-0+deb10u8. \n\nFor the stable distribution (bullseye), this problem has been fixed in\nversion 1.1.1k-1+deb11u2. \n\nFor the detailed security status of openssl please refer to\nits security tracker page at:\nhttps://security-tracker.debian.org/tracker/openssl\n\nFurther information about Debian Security Advisories, how to apply\nthese updates to your system and frequently asked questions can be\nfound at: https://www.debian.org/security/\n\nMailing list: debian-security-announce@lists.debian.org\n-----BEGIN PGP 
SIGNATURE-----\n\niQKTBAEBCgB9FiEERkRAmAjBceBVMd3uBUy48xNDz0QFAmIwxQtfFIAAAAAALgAo\naXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDQ2\nNDQ0MDk4MDhDMTcxRTA1NTMxRERFRTA1NENCOEYzMTM0M0NGNDQACgkQBUy48xND\nz0R2qw//c0GbzcbXlLfibf7Nki5CMJUdWqx1si8O2uQ1vKxgC07rCAx1Lrw0TtIl\nTq1vYRtSbvy8P4Qn3E6/lbSYTnM7JbkriZ1HS3Mw4VFlOBA8lWMif4KotrcMAoYE\nIOQlhhTCkKZM8cL4YKDwN7XSy5LSdt/sw5rIi1ZpgVTEXQeKIDPa5WK6YyIGNG6k\nh83TPYZp+8e3Fuoubb8RY5CUfFomdMHRazHcrCkjY+yvFTFdKbUza9RjUs44xu2Z\nZUTfIddR8D8mWfKOyvAVMw0A7/zjFW1IX0vC0RhHwjrulLgJbqWvcYQgEJy/wOKd\ntWjVwGya7+Fxn6GFL0rHZP/OFq9mDwxyBDfDg/hD+TSnbxtyHIxUH4QoWdPPgJxP\nahln2TNfsnQsCopdn9dJ/XOrkC35R7Jp11kmX8MCTP6k8ob4mdQIACcRND/jcPgT\ntOBoUBCrha98Qvdh6UAGegTxqOBaNhG52fpNjEegq/q7kxlugdOtbY1nZXvuHHI5\nC9Gd6e4JqpRlMDuT7rC8qchXJM8VnhWdVdz95gkeQCA21+AGJ+CEvTpSRPY6qCrM\nrUvS3HVrBFNLWNlsA68or3y8CfxjFbpXnSxflCmoBtmAp6z9TXm59Fu7N6Qqkpom\nyV0hQAqqeFa9u3NZKoNrj/FGWYXZ+zMt+jifRLokuB0IhFUOJ70=\n=SB84\n-----END PGP SIGNATURE-----\n. If that applies then:\n\nOpenSSL 1.0.2 users should apply git commit 6fc1aaaf3 (premium support\ncustomers only)\nOpenSSL 1.1.1 users should upgrade to 1.1.1m\nOpenSSL 3.0.0 users should upgrade to 3.0.1\n\nThis issue was found on the 10th of December 2021 and subsequently fixed\nby Bernd Edlinger. \n\nNote\n====\n\nOpenSSL 1.0.2 is out of support and no longer receiving public updates. \n\nReferences\n==========\n\nURL for this Security Advisory:\nhttps://www.openssl.org/news/secadv/20220128.txt\n\nNote: the online version of the advisory may be updated with additional details\nover time. 
\n\nFor details of OpenSSL severity classifications please see:\nhttps://www.openssl.org/policies/secpolicy.html\n", "sources": [ { "db": "NVD", "id": "CVE-2021-4160" }, { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "169298" }, { "db": "PACKETSTORM", "id": "169638" } ], "trust": 1.26 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-4160", "trust": 2.0 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.7 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.7 }, { "db": "PACKETSTORM", "id": "168714", "trust": 0.7 }, { "db": "CS-HELP", "id": "SB2022062021", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022012811", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022060710", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022031611", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022042517", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022051735", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2512", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2191", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.4616", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2022.2417", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202201-2650", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2021-4160", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169298", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169638", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "169298" }, { "db": "PACKETSTORM", "id": "169638" }, { "db": "NVD", "id": "CVE-2021-4160" }, { "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "id": "VAR-202201-1080", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { 
"@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:16:08.642000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "OpenSSL Fixes for encryption problem vulnerabilities", "trust": 0.6, "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=180884" }, { "title": "Debian Security Advisories: DSA-5103-1 openssl -- security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=debian_security_advisories\u0026qid=4ecbdda56426ff105b6a2939daf5c4e7" }, { "title": "Red Hat: CVE-2021-4160", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2021-4160" }, { "title": "IBM: Security Bulletin: IBM Sterling Control Center vulnerable to multiple issues to due IBM Cognos Analystics (CVE-2022-4160, CVE-2021-3733)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=9d831a6a306a903e583b6a76777d1085" }, { "title": "IBM: Security Bulletin: Vulnerabilities in OpenSSL affect IBM Spectrum Protect Plus SQL, File Indexing, and Windows Host agents", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=316fcbda8419e3988baf55ecd43960a6" }, { "title": "IBM: Security Bulletin: IBM Cognos Analytics has addressed multiple vulnerabilities (CVE-2022-34339, CVE-2021-3712, CVE-2021-3711, CVE-2021-4160, CVE-2021-29425, CVE-2021-3733, CVE-2021-3737, CVE-2022-0391, CVE-2021-43138, CVE-2022-24758)", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=cbece86f0c3bef5a678f2bb3dbbb854b" }, { "title": "", "trust": 0.1, "url": 
"https://github.com/actions-marketplace-validations/neuvector_scan-action " }, { "title": "", "trust": 0.1, "url": "https://github.com/neuvector/scan-action " }, { "title": "nodejs-helloworld", "trust": 0.1, "url": "https://github.com/andrewd-sysdig/nodejs-helloworld " }, { "title": "", "trust": 0.1, "url": "https://github.com/tianocore-docs/thirdpartysecurityadvisories " } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-noinfo", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2021-4160" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.3, "url": "https://www.oracle.com/security-alerts/cpuapr2022.html" }, { "trust": 1.8, "url": "https://www.openssl.org/news/secadv/20220128.txt" }, { "trust": 1.8, "url": "https://www.debian.org/security/2022/dsa-5103" }, { "trust": 1.8, "url": "https://security.gentoo.org/glsa/202210-02" }, { "trust": 1.7, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.7, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=3bf7b73ea7123045b8f972badc67ed6878e6c37f" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=6fc1aaaf303185aa5e483e06bdfae16daa9193a7" }, { "trust": 1.0, "url": "https://git.openssl.org/gitweb/?p=openssl.git%3ba=commitdiff%3bh=e9e726506cd2a3fd9c0f12daf8cc1fe934c7dddb" }, { "trust": 0.9, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4160" 
}, { "trust": 0.7, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=6fc1aaaf303185aa5e483e06bdfae16daa9193a7" }, { "trust": 0.7, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=3bf7b73ea7123045b8f972badc67ed6878e6c37f" }, { "trust": 0.7, "url": "https://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=e9e726506cd2a3fd9c0f12daf8cc1fe934c7dddb" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022051735" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2417" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.4616" }, { "trust": 0.6, "url": "https://access.redhat.com/security/cve/cve-2021-4160" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022060710" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/openssl-weak-encryption-via-mips-bn-mod-exp-37400" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2191" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022012811" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022042517" }, { "trust": 0.6, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-22-258-05" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022031611" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022062021" }, { "trust": 0.6, "url": "https://packetstormsecurity.com/files/168714/gentoo-linux-security-advisory-202210-02.html" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2022.2512" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/.html" }, { "trust": 0.1, "url": "https://github.com/actions-marketplace-validations/neuvector_scan-action" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2020-1968" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3712" }, { "trust": 0.1, "url": "https://bugs.gentoo.org." }, { "trust": 0.1, "url": "https://security.gentoo.org/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1473" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.1, "url": "https://creativecommons.org/licenses/by-sa/2.5" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.1, "url": "https://www.debian.org/security/faq" }, { "trust": 0.1, "url": "https://www.debian.org/security/" }, { "trust": 0.1, "url": "https://security-tracker.debian.org/tracker/openssl" }, { "trust": 0.1, "url": "https://www.openssl.org/news/secadv/20220315.txt" }, { "trust": 0.1, "url": "https://www.openssl.org/support/contracts.html" }, { "trust": 0.1, "url": "https://www.openssl.org/policies/secpolicy.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2016-0701" } ], "sources": [ { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "169298" }, { "db": "PACKETSTORM", "id": "169638" }, { "db": "NVD", "id": "CVE-2021-4160" }, { "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2021-4160" }, { "db": "PACKETSTORM", "id": "168714" }, { "db": "PACKETSTORM", "id": "169298" }, { "db": "PACKETSTORM", "id": "169638" }, { "db": "NVD", "id": "CVE-2021-4160" }, { "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-01-28T00:00:00", "db": "VULMON", "id": "CVE-2021-4160" }, { 
"date": "2022-10-17T13:44:06", "db": "PACKETSTORM", "id": "168714" }, { "date": "2022-03-28T19:12:00", "db": "PACKETSTORM", "id": "169298" }, { "date": "2022-01-28T12:12:12", "db": "PACKETSTORM", "id": "169638" }, { "date": "2022-01-28T22:15:15.133000", "db": "NVD", "id": "CVE-2021-4160" }, { "date": "2022-01-28T00:00:00", "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-11-09T00:00:00", "db": "VULMON", "id": "CVE-2021-4160" }, { "date": "2023-11-07T03:40:17.080000", "db": "NVD", "id": "CVE-2021-4160" }, { "date": "2022-10-18T00:00:00", "db": "CNNVD", "id": "CNNVD-202201-2650" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-2650" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "OpenSSL Input validation error vulnerability", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-2650" } ], "trust": 0.6 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "input validation error", "sources": [ { "db": "CNNVD", "id": "CNNVD-202201-2650" } ], "trust": 0.6 } }
var-202312-0209
Vulnerability from variot
A vulnerability has been identified in SINEC INS (All versions < V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart.
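The flaw described above is a missing length check on web UI parameters. As a minimal, hypothetical sketch of the defensive pattern (not Siemens code; the limit and handler names are invented for illustration), a request handler can reject over-long parameter values up front and fail safely instead of crashing:

```python
# Hypothetical sketch: reject over-long request parameters before processing.
# The absence of a check like this is the class of bug described above, where
# a crafted request with an oversized parameter crashes the server.
MAX_PARAM_LEN = 256  # assumed limit; real limits are application-specific


def validate_params(params: dict) -> None:
    """Raise ValueError if any parameter value exceeds the allowed length."""
    for name, value in params.items():
        if len(value) > MAX_PARAM_LEN:
            raise ValueError(f"parameter {name!r} exceeds {MAX_PARAM_LEN} chars")


def handle_request(params: dict) -> str:
    """Return an HTTP-style status string; never let bad input propagate."""
    try:
        validate_params(params)
    except ValueError:
        return "400 Bad Request"  # reject cleanly instead of crashing
    return "200 OK"
```

Rejecting unvalidated input at the boundary keeps a single malformed request from taking down the whole service, even when the caller is an authenticated admin.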
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0209", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "configurations": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "cve": "CVE-2023-48429", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "LOW", "baseScore": 2.7, "baseSeverity": "LOW", "confidentialityImpact": "NONE", "exploitabilityScore": 1.2, "impactScore": 1.4, "integrityImpact": "NONE", "privilegesRequired": "HIGH", "scope": "UNCHANGED", "trust": 1.0, 
"userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L", "version": "3.1" } ], "severity": [ { "author": "productcert@siemens.com", "id": "CVE-2023-48429", "trust": 1.0, "value": "LOW" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart.", "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ], "trust": 1.0 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "SIEMENS", "id": "SSA-077170", "trust": 1.0 }, { "db": "NVD", "id": "CVE-2023-48429", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "id": "VAR-202312-0209", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2023-12-18T11:11:19.708000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-754", "trust": 1.0 } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "references": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.0, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "NVD", "id": "CVE-2023-48429" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-12T12:15:15.083000", "db": "NVD", "id": "CVE-2023-48429" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-12-14T19:37:51.017000", "db": "NVD", "id": "CVE-2023-48429" } ] } }
var-202009-1544
Vulnerability from variot
Protocol encryption can be easily broken for CodeMeter (All versions prior to 6.90 are affected, including Version 6.90 or newer only if CodeMeter Runtime is running as server) and the server accepts external connections, which may allow an attacker to remotely communicate with the CodeMeter API. CodeMeter contains a cryptographic vulnerability that may allow information to be obtained or tampered with, or service to be disrupted (DoS). Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens of Germany and an integral part of its HMI product line. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants.
Many Siemens products are affected by this vulnerability. Attackers can exploit it to communicate with the CodeMeter API remotely
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202009-1544", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "codemeter", "scope": "lt", "trust": 1.0, "vendor": "wibu", "version": "6.90" }, { "model": "codemeter", "scope": null, "trust": 0.8, "vendor": "wibu", "version": null }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": null 
}, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": "6.90" }, { "model": "information server sp1", "scope": "lte", "trust": 0.6, "vendor": "siemens", "version": "\u003c=2019" }, { "model": "simatic wincc oa", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.17" }, { "model": "sinec ins", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.04" }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.06" }, { "model": "sppa-t3000 r8.2 sp2", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s3000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.05" }, { "model": "sppa-s3000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.04" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:wibu:codemeter:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "6.90", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-14517" } ] }, "cve": "CVE-2020-14517", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": 
{ "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 7.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-14517", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.8, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "CNVD", "availabilityImpact": "COMPLETE", "baseScore": 9.7, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "id": "CNVD-2020-51242", "impactScore": 9.5, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.6, "vectorString": "AV:N/AC:L/Au:N/C:P/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", 
"privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-14517", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-14517", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNVD", "id": "CNVD-2020-51242", "trust": 0.6, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202009-489", "trust": 0.6, "value": "CRITICAL" } ] } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" }, { "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Protocol encryption can be easily broken for CodeMeter (All versions prior to 6.90 are affected, including Version 6.90 or newer only if CodeMeter Runtime is running as server) and the server accepts external connections, which may allow an attacker to remotely communicate with the CodeMeter API. CodeMeter Contains a cryptographic vulnerability.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. 
The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMany Siemens products have security vulnerabilities. Attackers can use the vulnerability to communicate with CodeMeter API remotely", "sources": [ { "db": "NVD", "id": "CVE-2020-14517" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "CNVD", "id": "CNVD-2020-51242" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-14517", "trust": 3.8 }, { "db": "ICS CERT", "id": "ICSA-20-203-01", "trust": 2.4 }, { "db": "JVN", "id": "JVNVU90770748", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU94568336", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-011222", "trust": 0.8 }, { "db": "SIEMENS", "id": "SSA-455843", "trust": 0.6 }, { "db": "CNVD", "id": "CNVD-2020-51242", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.3", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022021806", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202009-489", "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" }, { "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, 
"id": "VAR-202009-1544", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" } ], "trust": 1.3399059128571427 }, "iot_taxonomy": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "category": [ "ICS" ], "sub_category": null, "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" } ] }, "last_update_date": "2023-12-18T10:57:30.677000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "CodeMeter", "trust": 0.8, "url": "https://www.wibu.com/products/codemeter.html" }, { "title": "Patch for Vulnerabilities in insufficient encryption strength of many Siemens products", "trust": 0.6, "url": "https://www.cnvd.org.cn/patchinfo/show/233344" }, { "title": "ARC and MATIO Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=127910" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-327", "trust": 1.0 }, { "problemtype": "Inadequate encryption strength (CWE-326) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" } ] }, "references": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14517" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu94568336/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90770748/" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" }, { "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "CNVD", "id": "CNVD-2020-51242" }, { "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "db": "NVD", "id": "CVE-2020-14517" }, { "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51242" }, { "date": "2021-03-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "date": "2020-09-16T20:15:13.647000", "db": "NVD", "id": "CVE-2020-14517" }, { "date": "2020-09-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, 
"sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51242" }, { "date": "2022-03-15T05:10:00", "db": "JVNDB", "id": "JVNDB-2020-011222" }, { "date": "2021-11-04T18:15:08.017000", "db": "NVD", "id": "CVE-2020-14517" }, { "date": "2022-02-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-489" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-489" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "CodeMeter\u00a0 Vulnerability in cryptography", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011222" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "encryption problem", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-489" } ], "trust": 0.6 } }
var-202009-0596
Vulnerability from variot
An attacker could send a specially crafted packet that could have CodeMeter (All versions prior to 7.10) send back packets containing data from the heap. CodeMeter is vulnerable to improper shutdown and release of resources; information may be obtained. Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens of Germany and an integral part of its HMI product line. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants.
Many Siemens products are affected by this vulnerability
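The heap-disclosure flaw above belongs to a well-known bug class: a reply is built from an attacker-controlled length field instead of the actual payload size, so the response can include adjacent memory. A minimal, hypothetical sketch (the buffer layout and function names are invented; this is not the actual CodeMeter protocol) contrasts the unsafe and safe patterns:

```python
# Hypothetical sketch of the heap-disclosure bug class. In Python, slicing
# a heap region is simulated explicitly; in C, the over-read happens silently.

def build_reply_unsafe(heap_region: bytes, claimed_len: int) -> bytes:
    # BAD: copies claimed_len bytes starting at the payload, so when the
    # claimed length exceeds the real payload, stale heap data leaks back.
    return heap_region[:claimed_len]


def build_reply_safe(payload: bytes, claimed_len: int) -> bytes:
    # GOOD: clamp the reply to the bytes actually received, ignoring any
    # larger length the client claims.
    return payload[:min(claimed_len, len(payload))]
```

The fix is simply to never trust a length field beyond the data that was actually validated and received.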
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202009-0596", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "codemeter", "scope": "lt", "trust": 1.0, "vendor": "wibu", "version": "7.10" }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": null }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": "7.10" 
}, { "model": "information server sp1", "scope": "lte", "trust": 0.6, "vendor": "siemens", "version": "\u003c=2019" }, { "model": "simatic wincc oa", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.17" }, { "model": "sinec ins", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.04" }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.06" }, { "model": "sppa-t3000 r8.2 sp2", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s3000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.05" }, { "model": "sppa-s3000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.04" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:wibu:codemeter:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "7.10", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-16233" } ] }, "cve": "CVE-2020-16233", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": 
"https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.0, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 2.9, "integrityImpact": "NONE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.0, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-16233", "impactScore": null, "integrityImpact": "None", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.8, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "CNVD", "availabilityImpact": "NONE", "baseScore": 7.8, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "id": "CNVD-2020-51240", "impactScore": 6.9, "integrityImpact": "NONE", "severity": "HIGH", "trust": 0.6, "vectorString": "AV:N/AC:L/Au:N/C:C/I:N/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 7.5, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 3.6, "integrityImpact": "NONE", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": 
"NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 7.5, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-16233", "impactScore": null, "integrityImpact": "None", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-16233", "trust": 1.8, "value": "HIGH" }, { "author": "CNVD", "id": "CNVD-2020-51240", "trust": 0.6, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202009-482", "trust": 0.6, "value": "HIGH" } ] } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" }, { "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An attacker could send a specially crafted packet that could have CodeMeter (All versions prior to 7.10) send back packets containing data from the heap. CodeMeter Is vulnerable to an improper shutdown and release of resources.Information may be obtained. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. 
SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMany Siemens products have security vulnerabilities", "sources": [ { "db": "NVD", "id": "CVE-2020-16233" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "CNVD", "id": "CNVD-2020-51240" } ], "trust": 2.16 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-16233", "trust": 3.8 }, { "db": "ICS CERT", "id": "ICSA-20-203-01", "trust": 2.4 }, { "db": "JVN", "id": "JVNVU90770748", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU94568336", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-011224", "trust": 0.8 }, { "db": "SIEMENS", "id": "SSA-455843", "trust": 0.6 }, { "db": "CNVD", "id": "CNVD-2020-51240", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.3", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022021806", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202009-482", "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" }, { "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "id": "VAR-202009-0596", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" } ], "trust": 1.3399059128571427 }, "iot_taxonomy": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#", "data": { "@container": "@list" }, "sources": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "category": [ "ICS" ], "sub_category": null, "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" } ] }, "last_update_date": "2023-12-18T11:03:20.483000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "CodeMeter", "trust": 0.8, "url": "https://www.wibu.com/products/codemeter.html" }, { "title": "Patch for Various Siemens products release improper loopholes", "trust": 0.6, "url": "https://www.cnvd.org.cn/patchinfo/show/233350" }, { "title": "ARC Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=127903" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-404", "trust": 1.0 }, { "problemtype": "Improper shutdown and release of resources (CWE-404) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.4, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-16233" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu94568336/index.html" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90770748/" }, { 
"trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" }, { "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "CNVD", "id": "CNVD-2020-51240" }, { "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "db": "NVD", "id": "CVE-2020-16233" }, { "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-09T00:00:00", "db": "CNVD", "id": "CNVD-2020-51240" }, { "date": "2021-03-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "date": "2020-09-16T20:15:13.817000", "db": "NVD", "id": "CVE-2020-16233" }, { "date": "2020-09-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51240" }, { "date": "2022-03-11T06:04:00", "db": "JVNDB", "id": "JVNDB-2020-011224" }, { "date": "2020-09-18T16:11:42.850000", "db": "NVD", "id": "CVE-2020-16233" }, { "date": "2022-02-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-482" } ] }, "threat_type": { "@context": 
{ "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-482" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "CodeMeter\u00a0 Improper Resource Shutdown and Release Vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011224" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-482" } ], "trust": 0.6 } }
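The CVE-2020-16233 record above concerns CodeMeter replying with packets containing data from the heap. The advisory does not describe CodeMeter's internals, but the bug class is easy to illustrate: a reply buffer that is reused across requests without being cleared, combined with a client-controlled reply length, leaks stale data from earlier requests. A minimal Python sketch (hypothetical code, not CodeMeter's):

```python
class ReplyBuffer:
    """Illustrates a heap-data-leak pattern (not CodeMeter's actual code)."""

    def __init__(self, size: int = 64):
        self._buf = bytearray(size)  # reused across requests

    def respond_leaky(self, payload: bytes, reply_len: int) -> bytes:
        # Flawed: copies the payload in, then returns reply_len bytes even
        # if the payload is shorter -- stale bytes from a previous request
        # leak out in the reply.
        self._buf[:len(payload)] = payload
        return bytes(self._buf[:reply_len])

    def respond_safe(self, payload: bytes, reply_len: int) -> bytes:
        # Fixed: clear the buffer between uses and never return more bytes
        # than were actually written for this request.
        n = min(reply_len, len(payload))
        self._buf[:] = bytes(len(self._buf))
        self._buf[:len(payload)] = payload
        return bytes(self._buf[:n])
```

The fix is twofold: zero the buffer between uses, and cap the reply at the number of bytes actually written.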
var-202009-1545
Vulnerability from variot
Multiple memory corruption vulnerabilities exist in CodeMeter (all versions prior to 7.10) where the packet parser mechanism does not verify length fields. An attacker could send specially crafted packets to exploit these vulnerabilities. CodeMeter is therefore vulnerable to buffer access with an improper length value: information may be obtained or tampered with, and service may be disrupted (DoS). Siemens SIMATIC WinCC OA (Open Architecture) is a SCADA system from Siemens (Germany) and an integral part of its HMI series. The system is mainly used in industries such as rail transit, building automation and public power supply. Information Server is used to report on and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of the DCS system SPPA-T3000. SPPA-T3000 is a distributed control system mainly used in fossil-fuel and large renewable-energy power plants.
Many Siemens products are affected by these memory corruption vulnerabilities.
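The root cause named here, a packet parser that does not verify length fields before using them (CWE-805), can be sketched as follows. The packet layout and the `parse_packet` helper are hypothetical illustrations, not CodeMeter's actual protocol:

```python
import struct

# Hypothetical TLV-style header: 2-byte message type, 2-byte declared
# payload length, both big-endian.
HEADER = struct.Struct(">HH")

def parse_packet(data: bytes) -> tuple[int, bytes]:
    """Parse a packet, validating the declared length field.

    The CWE-805 flaw class arises when the sender's declared length is
    used to index or copy from a buffer without checking it against the
    number of bytes actually received.
    """
    if len(data) < HEADER.size:
        raise ValueError("truncated header")
    msg_type, declared_len = HEADER.unpack_from(data)
    payload = data[HEADER.size:]
    # The critical check: never trust the sender's length field.
    if declared_len != len(payload):
        raise ValueError(
            f"declared length {declared_len} != actual {len(payload)}")
    return msg_type, payload
```

A parser that instead sliced or copied `declared_len` bytes unconditionally would read (or write) past the data it was given, which is the memory-corruption pattern described in the record.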
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202009-1545", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "codemeter", "scope": "lt", "trust": 1.0, "vendor": "wibu", "version": "7.10" }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": "7.10" }, { "model": "codemeter", "scope": "eq", "trust": 0.8, "vendor": "wibu", "version": null 
}, { "model": "information server sp1", "scope": "lte", "trust": 0.6, "vendor": "siemens", "version": "\u003c=2019" }, { "model": "simatic wincc oa", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.17" }, { "model": "sinec ins", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.04" }, { "model": "sppa-s2000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.06" }, { "model": "sppa-t3000 r8.2 sp2", "scope": null, "trust": 0.6, "vendor": "siemens", "version": null }, { "model": "sppa-s3000", "scope": "eq", "trust": 0.6, "vendor": "siemens", "version": "3.05" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:wibu:codemeter:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "7.10", "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2020-14509" } ] }, "cve": "CVE-2020-14509", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "PARTIAL", "baseScore": 7.5, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 10.0, "impactScore": 6.4, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "Partial", "baseScore": 7.5, "confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2020-14509", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:L/Au:N/C:P/I:P/A:P", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "NETWORK", "authentication": "NONE", "author": "CNVD", "availabilityImpact": "COMPLETE", "baseScore": 10.0, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 10.0, "id": "CNVD-2020-51245", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.6, "vectorString": "AV:N/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 9.8, "baseSeverity": "CRITICAL", "confidentialityImpact": "HIGH", "exploitabilityScore": 3.9, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "NONE", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": 
"3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "High", "baseScore": 9.8, "baseSeverity": "Critical", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2020-14509", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "None", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2020-14509", "trust": 1.8, "value": "CRITICAL" }, { "author": "CNVD", "id": "CNVD-2020-51245", "trust": 0.6, "value": "HIGH" }, { "author": "CNNVD", "id": "CNNVD-202009-491", "trust": 0.6, "value": "CRITICAL" }, { "author": "VULMON", "id": "CVE-2020-14509", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" }, { "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Multiple memory corruption vulnerabilities exist in CodeMeter (All versions prior to 7.10) where the packet parser mechanism does not verify length fields. An attacker could send specially crafted packets to exploit these vulnerabilities. CodeMeter There is a vulnerability in accessing the buffer with an improper length value.Information is obtained, information is tampered with, and service is disrupted (DoS) It may be put into a state. Siemens SIMATIC WinCC OA (Open Architecture) is a set of SCADA system of Siemens (Siemens), Germany, and it is also an integral part of HMI series. The system is mainly suitable for industries such as rail transit, building automation and public power supply. 
Information Server is used to report and visualize the process data stored in the Process Historian. SINEC INS is a web-based application that combines various network services in one tool. SPPA-S2000 simulates the automation component (S7) of the nuclear DCS system SPPA-T2000. SPPA-S3000 simulates the automation components of DCS system SPPA-T3000. SPPA-T3000 is a distributed control system, mainly used in fossil and large renewable energy power plants. \n\r\n\r\nMany Siemens products have memory corruption vulnerabilities", "sources": [ { "db": "NVD", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" } ], "trust": 2.25 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2020-14509", "trust": 3.9 }, { "db": "ICS CERT", "id": "ICSA-20-203-01", "trust": 2.5 }, { "db": "JVN", "id": "JVNVU90770748", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU94568336", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2020-011219", "trust": 0.8 }, { "db": "SIEMENS", "id": "SSA-455843", "trust": 0.6 }, { "db": "CNVD", "id": "CNVD-2020-51245", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.2", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076.3", "trust": 0.6 }, { "db": "AUSCERT", "id": "ESB-2020.3076", "trust": 0.6 }, { "db": "CS-HELP", "id": "SB2022021806", "trust": 0.6 }, { "db": "CNNVD", "id": "CNNVD-202009-491", "trust": 0.6 }, { "db": "VULMON", "id": "CVE-2020-14509", "trust": 0.1 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" }, { "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "id": "VAR-202009-1545", "iot": { 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" } ], "trust": 1.3399059128571427 }, "iot_taxonomy": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot_taxonomy#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "category": [ "ICS" ], "sub_category": null, "trust": 0.6 } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" } ] }, "last_update_date": "2023-12-18T10:56:30.697000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "CodeMeter", "trust": 0.8, "url": "https://www.wibu.com/products/codemeter.html" }, { "title": "Patch for Memory corruption vulnerabilities in many Siemens products", "trust": 0.6, "url": "https://www.cnvd.org.cn/patchinfo/show/233335" }, { "title": "ARC and MATIO Security vulnerabilities", "trust": 0.6, "url": "http://www.cnnvd.org.cn/web/xxk/bdxqbyid.tag?id=127912" }, { "title": "Siemens Security Advisories: Siemens Security Advisory", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=6161645a91c3d669954a802b5a5a2baf" }, { "title": "Threatpost", "trust": 0.1, "url": "https://threatpost.com/severe-industrial-bugs-takeover-critical-systems/159068/" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "NVD-CWE-Other", "trust": 1.0 }, { "problemtype": "Accessing the buffer with improper length values (CWE-805) [ Other ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 2.5, "url": "https://us-cert.cisa.gov/ics/advisories/icsa-20-203-01" }, { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-14509" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu94568336/" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu90770748/" }, { "trust": 0.6, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-455843.pdf" }, { "trust": 0.6, "url": "https://vigilance.fr/vulnerability/siemens-simatic-six-vulnerabilities-via-wibu-systems-codemeter-runtime-33282" }, { "trust": 0.6, "url": "https://www.cybersecurity-help.cz/vdb/sb2022021806" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.2/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076.3/" }, { "trust": 0.6, "url": "https://www.auscert.org.au/bulletins/esb-2020.3076/" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/805.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://exchange.xforce.ibmcloud.com/vulnerabilities/187940" }, { "trust": 0.1, "url": "https://threatpost.com/severe-industrial-bugs-takeover-critical-systems/159068/" } ], "sources": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" }, { "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "sources": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "CNVD", "id": "CNVD-2020-51245" }, { "db": "VULMON", "id": "CVE-2020-14509" }, { "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "db": "NVD", "id": "CVE-2020-14509" }, { "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51245" }, { "date": "2020-09-16T00:00:00", "db": "VULMON", "id": "CVE-2020-14509" }, { "date": "2021-03-24T00:00:00", "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "date": "2020-09-16T20:15:13.380000", "db": "NVD", "id": "CVE-2020-14509" }, { "date": "2020-09-08T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2020-09-10T00:00:00", "db": "CNVD", "id": "CNVD-2020-51245" }, { "date": "2020-09-22T00:00:00", "db": "VULMON", "id": "CVE-2020-14509" }, { "date": "2022-03-15T05:02:00", "db": "JVNDB", "id": "JVNDB-2020-011219" }, { "date": "2021-11-04T18:22:07.627000", "db": "NVD", "id": "CVE-2020-14509" }, { "date": "2022-02-21T00:00:00", "db": "CNNVD", "id": "CNNVD-202009-491" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-491" } ], "trust": 0.6 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "CodeMeter\u00a0 Vulnerability in accessing buffers with improper length values in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2020-011219" } 
], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "other", "sources": [ { "db": "CNNVD", "id": "CNNVD-202009-491" } ], "trust": 0.6 } }
var-202312-0206
Vulnerability from variot
A vulnerability has been identified in SINEC INS (all versions < V1.0 SP2 Update 2). The RADIUS configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate, resulting in a denial-of-service condition or potentially the ability to issue commands at system level. An OS command injection vulnerability therefore exists in Siemens' SINEC INS: information may be obtained or tampered with, and service operation may be disrupted (DoS).
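The advisory gives no details of SINEC INS internals, but the bug class (CWE-78 reached through an unvalidated uploaded file) suggests two standard defenses: structurally validate the upload before using it, and never build shell command strings from attacker-influenced filenames or file contents. A minimal Python sketch with hypothetical helper names:

```python
import base64
import binascii
import subprocess

def looks_like_pem_cert(data: bytes) -> bool:
    """Basic structural check that an upload is a single PEM certificate.

    This is an illustrative filter only, not full X.509 validation.
    """
    try:
        text = data.decode("ascii")
    except UnicodeDecodeError:
        return False
    lines = text.strip().splitlines()
    if len(lines) < 3:
        return False
    if (lines[0] != "-----BEGIN CERTIFICATE-----"
            or lines[-1] != "-----END CERTIFICATE-----"):
        return False
    try:
        base64.b64decode("".join(lines[1:-1]), validate=True)
    except binascii.Error:
        return False
    return True

def install_radius_cert(path: str) -> None:
    # Pass the filename as a discrete argv element -- never interpolate it
    # into a shell string (e.g. f"openssl x509 -in {path}" with
    # shell=True), which is the command-injection pattern of CWE-78.
    subprocess.run(["openssl", "x509", "-in", path, "-noout"], check=True)
```

With list-form `argv`, a filename or certificate field containing shell metacharacters is passed to the tool as inert data rather than interpreted by a shell.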
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202312-0206", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "sinec ins", "scope": "eq", "trust": 0.8, "vendor": 
"\u30b7\u30fc\u30e1\u30f3\u30b9", "version": "1.0" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp2_update_1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2023-48428" } ] }, "cve": "CVE-2023-48428", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "productcert@siemens.com", "availabilityImpact": "HIGH", "baseScore": 7.2, "baseSeverity": 
"HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.2, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "HIGH", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "OTHER", "availabilityImpact": "High", "baseScore": 7.2, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "JVNDB-2023-019616", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "High", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "productcert@siemens.com", "id": "CVE-2023-48428", "trust": 1.0, "value": "HIGH" }, { "author": "OTHER", "id": "JVNDB-2023-019616", "trust": 0.8, "value": "High" } ] } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The radius configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate resulting in a denial-of-service condition or potentially issue commands on system level. Siemens\u0027 SINEC INS for, OS A command injection vulnerability exists.Information is obtained, information is tampered with, and service operation is interrupted. 
(DoS) It may be in a state", "sources": [ { "db": "NVD", "id": "CVE-2023-48428" }, { "db": "JVNDB", "id": "JVNDB-2023-019616" } ], "trust": 1.62 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2023-48428", "trust": 2.6 }, { "db": "SIEMENS", "id": "SSA-077170", "trust": 1.8 }, { "db": "ICS CERT", "id": "ICSA-23-348-16", "trust": 0.8 }, { "db": "JVN", "id": "JVNVU98271228", "trust": 0.8 }, { "db": "JVNDB", "id": "JVNDB-2023-019616", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "id": "VAR-202312-0206", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-01-18T21:45:48.042000Z", "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-78", "trust": 1.0 }, { "problemtype": "OS Command injection (CWE-78) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.8, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" }, { "trust": 0.8, "url": "https://jvn.jp/vu/jvnvu98271228/" }, { "trust": 0.8, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2023-48428" }, { "trust": 0.8, "url": "https://www.cisa.gov/news-events/ics-advisories/icsa-23-348-16" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "db": "NVD", "id": "CVE-2023-48428" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2024-01-15T00:00:00", "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "date": "2023-12-12T12:15:14.873000", "db": "NVD", "id": "CVE-2023-48428" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2024-01-15T02:20:00", "db": "JVNDB", "id": "JVNDB-2023-019616" }, { "date": "2023-12-14T19:38:27.703000", "db": "NVD", "id": "CVE-2023-48428" } ] }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Siemens\u0027 \u00a0SINEC\u00a0INS\u00a0 In \u00a0OS\u00a0 Command injection vulnerability", "sources": [ { "db": "JVNDB", "id": "JVNDB-2023-019616" } ], "trust": 0.8 } }
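The 7.2 base score in the record above (with exploitability subscore 1.2 and impact subscore 5.9) follows mechanically from the vector string. A minimal sketch of the CVSS v3.1 base-score arithmetic, using the metric weights and Roundup function from the FIRST CVSS v3.1 specification (this code is illustrative and not part of any record):

```python
# CVSS v3.1 base-score calculation (base metrics only), following the
# FIRST CVSS v3.1 specification. Temporal metrics (E/RL/RC) present in
# some vector strings are parsed but ignored, as they do not affect the
# base score.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {  # privileges-required weight depends on Scope
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},
    "C": {"N": 0.85, "L": 0.68, "H": 0.5},
}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}


def roundup(x: float) -> float:
    # "Roundup" as defined in the spec: smallest value with one decimal
    # place >= x, computed on an integer to avoid floating-point artifacts.
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10.0


def base_score(vector: str) -> float:
    m = dict(p.split(":") for p in vector.split("/")[1:])
    scope_changed = m["S"] == "C"
    iss = 1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    expl = 8.22 * AV[m["AV"]] * AC[m["AC"]] * PR[m["S"]][m["PR"]] * UI[m["UI"]]
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + expl) if scope_changed else impact + expl
    return roundup(min(raw, 10))
```

For example, `base_score("CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H")` reproduces the 7.2 assigned to CVE-2023-48428, and the scope-changed vector of CVE-2022-45092 yields 9.9, matching the records below.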
cve-2022-45094
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-03T14:01:31.530Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 1" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially inject commands into the dhcpd configuration of the affected product. An attacker might leverage this to trigger remote code execution on the affected component." } ], "metrics": [ { "cvssV3_1": { "baseScore": 8.4, "baseSeverity": "HIGH", "vectorString": "CVSS:3.1/AV:A/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-77", "description": "CWE-77: Improper Neutralization of Special Elements used in a Command (\u0027Command Injection\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-01-10T11:39:44.116Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2022-45094", "datePublished": "2023-01-10T11:39:44.116Z", "dateReserved": "2022-11-09T14:32:46.476Z", "dateUpdated": "2024-08-03T14:01:31.530Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
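CVE-2022-45094 describes injection of commands into the product's dhcpd configuration via the web management interface. The generic CWE-77 mitigation is to allowlist-validate any user-supplied value before interpolating it into a configuration statement. A hypothetical sketch (the function name and character set are assumptions for illustration, not Siemens' actual fix):

```python
import re

# Characters outside this allowlist (e.g. ';', '{', '}', newlines) could
# terminate a dhcpd.conf statement and start a new, attacker-chosen one.
_SAFE_VALUE = re.compile(r"^[A-Za-z0-9._-]+$")


def validate_dhcp_value(value: str) -> str:
    """Allowlist validation for a value destined for a dhcpd.conf
    statement. Illustrative only; rejects anything that is not a plain
    hostname-like token."""
    if not _SAFE_VALUE.fullmatch(value):
        raise ValueError(f"rejected unsafe configuration value: {value!r}")
    return value
```

Rejecting by allowlist (rather than stripping known-bad characters) is the usual recommendation for CWE-77/78, since escaping rules differ per configuration grammar.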
cve-2024-46890
Vulnerability from cvelistv5
9.4 (Critical) - CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "sinec_ins", "vendor": "seimens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-46890", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:26:52.518770Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:28:21.227Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly validate input sent to specific endpoints of its web API. This could allow an authenticated remote attacker with high privileges on the application to execute arbitrary code on the underlying OS." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 9.1, "baseSeverity": "CRITICAL", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 9.4, "baseSeverity": "CRITICAL", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-78", "description": "CWE-78: Improper Neutralization of Special Elements used in an OS Command (\u0027OS Command Injection\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:41.829Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46890", "datePublished": "2024-11-12T12:49:41.829Z", "dateReserved": "2024-09-12T11:24:19.243Z", "dateUpdated": "2024-11-12T14:28:21.227Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2022-45093
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-03T14:01:31.489Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 1" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product as well as with access to the SFTP server of the affected product (22/tcp), could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component." } ], "metrics": [ { "cvssV3_1": { "baseScore": 8.5, "baseSeverity": "HIGH", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-22", "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-01-10T11:39:43.047Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2022-45093", "datePublished": "2023-01-10T11:39:43.047Z", "dateReserved": "2022-11-09T14:32:46.476Z", "dateUpdated": "2024-08-03T14:01:31.489Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2023-48428
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T21:30:34.959Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 2" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The radius configuration mechanism of affected products does not correctly check uploaded certificates. A malicious admin could upload a crafted certificate resulting in a denial-of-service condition or potentially issue commands on system level." } ], "metrics": [ { "cvssV3_1": { "baseScore": 7.2, "baseSeverity": "HIGH", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-78", "description": "CWE-78: Improper Neutralization of Special Elements used in an OS Command (\u0027OS Command Injection\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-12-12T11:27:19.590Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2023-48428", "datePublished": "2023-12-12T11:27:19.590Z", "dateReserved": "2023-11-16T16:30:40.849Z", "dateUpdated": "2024-08-02T21:30:34.959Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-46888
Vulnerability from cvelistv5
9.4 (Critical) - CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "sinec_ins", "vendor": "seimens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-46888", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:31:00.141310Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:32:11.296Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly sanitize user provided paths for SFTP-based file up- and downloads. This could allow an authenticated remote attacker to manipulate arbitrary files on the filesystem and achieve arbitrary code execution on the device." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 9.9, "baseSeverity": "CRITICAL", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 9.4, "baseSeverity": "CRITICAL", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:H/SI:H/SA:H", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-22", "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:39.127Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46888", "datePublished": "2024-11-12T12:49:39.127Z", "dateReserved": "2024-09-12T11:24:19.243Z", "dateUpdated": "2024-11-12T14:32:11.296Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-46892
Vulnerability from cvelistv5
6.9 (Medium) - CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N
{ "containers": { "adp": [ { "metrics": [ { "other": { "content": { "id": "CVE-2024-46892", "options": [ { "Exploitation": "none" }, { "Automatable": "no" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:21:05.449383Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:21:32.457Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly invalidate sessions when the associated user is deleted or disabled or their permissions are modified. This could allow an authenticated attacker to continue performing malicious actions even after their user account has been disabled." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 4.9, "baseSeverity": "MEDIUM", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:H/A:N/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 6.9, "baseSeverity": "MEDIUM", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:H/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-613", "description": "CWE-613: Insufficient Session Expiration", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:44.470Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46892", "datePublished": "2024-11-12T12:49:44.470Z", "dateReserved": "2024-09-12T11:24:19.243Z", "dateUpdated": "2024-11-12T14:21:32.457Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
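The CWE-613 weakness in CVE-2024-46892 is that sessions outlive the account changes that should revoke them. The missing behaviour can be sketched as a session store that invalidates all of a user's tokens the moment the account is disabled (a generic pattern, not the vendor's implementation):

```python
class SessionStore:
    """Minimal sketch of session revocation on account disablement,
    i.e. the behaviour whose absence constitutes CWE-613 here."""

    def __init__(self) -> None:
        self._sessions: dict[str, str] = {}  # token -> username
        self._disabled: set[str] = set()

    def login(self, token: str, user: str) -> None:
        if user in self._disabled:
            raise PermissionError(f"account disabled: {user}")
        self._sessions[token] = user

    def disable_user(self, user: str) -> None:
        self._disabled.add(user)
        # Invalidate every live session owned by the user immediately,
        # instead of letting existing tokens keep working.
        self._sessions = {t: u for t, u in self._sessions.items()
                          if u != user}

    def is_valid(self, token: str) -> bool:
        user = self._sessions.get(token)
        return user is not None and user not in self._disabled
```

The same revocation hook would also run when a user is deleted or their permissions change, per the description in the record above.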
cve-2023-48431
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T21:30:35.087Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 2" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected software does not correctly validate the response received by an UMC server. An attacker can use this to crash the affected software by providing and configuring a malicious UMC server or by manipulating the traffic from a legitimate UMC server (i.e. leveraging CVE-2023-48427)." } ], "metrics": [ { "cvssV3_1": { "baseScore": 6.8, "baseSeverity": "MEDIUM", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:N/I:N/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-754", "description": "CWE-754: Improper Check for Unusual or Exceptional Conditions", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-12-12T11:27:23.326Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2023-48431", "datePublished": "2023-12-12T11:27:23.326Z", "dateReserved": "2023-11-16T16:30:40.850Z", "dateUpdated": "2024-08-02T21:30:35.087Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2022-45092
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-03T14:01:31.534Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 1" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 1). An authenticated remote attacker with access to the Web Based Management (443/tcp) of the affected product, could potentially read and write arbitrary files from and to the device\u0027s file system. An attacker might leverage this to trigger remote code execution on the affected component." } ], "metrics": [ { "cvssV3_1": { "baseScore": 9.9, "baseSeverity": "CRITICAL", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-22", "description": "CWE-22: Improper Limitation of a Pathname to a Restricted Directory (\u0027Path Traversal\u0027)", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-01-10T11:39:41.994Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-332410.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2022-45092", "datePublished": "2023-01-10T11:39:41.994Z", "dateReserved": "2022-11-09T14:32:46.476Z", "dateUpdated": "2024-08-03T14:01:31.534Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
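CVE-2022-45092, CVE-2022-45093, and CVE-2024-46888 are all CWE-22 path traversals in file up-/download handling. The standard defence is to resolve the client-supplied path against the configured root and reject anything that escapes it. A hypothetical containment check (function and parameter names are assumptions for illustration):

```python
from pathlib import Path


def resolve_under_root(root: str, user_path: str) -> Path:
    """Resolve a client-supplied path and ensure it stays inside the
    configured file-service root. Illustrative CWE-22 mitigation, not
    the vendor's code."""
    base = Path(root).resolve()
    # Treat absolute client paths as relative to the root, then resolve
    # '..' segments and symlinks before the containment check.
    candidate = (base / user_path.lstrip("/")).resolve()
    if candidate != base and base not in candidate.parents:
        raise PermissionError(f"path escapes service root: {user_path!r}")
    return candidate
```

Resolving before comparing is the important step: a naive prefix check on the raw string would still accept `"uploads/../../etc/passwd"`.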
cve-2024-46891
Vulnerability from cvelistv5
6.9 (Medium) - CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "sinec_ins", "vendor": "seimens", "versions": [ { "lessThan": "V1.0_SP2_Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-46891", "options": [ { "Exploitation": "none" }, { "Automatable": "yes" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:22:45.870908Z", "version": "2.0.3" }, "type": "ssvc" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-125", "description": "CWE-125 Out-of-bounds Read", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:25:48.481Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly restrict the size of generated log files. This could allow an unauthenticated remote attacker to trigger a large amount of logged events to exhaust the system\u0027s resources and create a denial of service condition." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 5.3, "baseSeverity": "MEDIUM", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 6.9, "baseSeverity": "MEDIUM", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-400", "description": "CWE-400: Uncontrolled Resource Consumption", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:43.155Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46891", "datePublished": "2024-11-12T12:49:43.155Z", "dateReserved": "2024-09-12T11:24:19.243Z", "dateUpdated": "2024-11-12T14:25:48.481Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2023-48430
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T21:30:35.228Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 2" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The REST API of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the API. The server will automatically restart." } ], "metrics": [ { "cvssV3_1": { "baseScore": 2.7, "baseSeverity": "LOW", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-392", "description": "CWE-392: Missing Report of Error Condition", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-12-12T11:27:22.091Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2023-48430", "datePublished": "2023-12-12T11:27:22.091Z", "dateReserved": "2023-11-16T16:30:40.849Z", "dateUpdated": "2024-08-02T21:30:35.228Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-46894
Vulnerability from cvelistv5
5.3 (Medium) - CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:L/VI:L/VA:L/SC:N/SI:N/SA:N
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:siemens:sinec_ins:-:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "sinec_ins", "vendor": "siemens", "versions": [ { "lessThan": "v1.0_sp2_update_3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-46894", "options": [ { "Exploitation": "none" }, { "Automatable": "yes" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:16:33.854628Z", "version": "2.0.3" }, "type": "ssvc" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-276", "description": "CWE-276 Incorrect Default Permissions", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:19:46.429Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application does not properly validate authorization of a user to query the \"/api/sftp/users\" endpoint. This could allow an authenticated remote attacker to gain knowledge about the list of configured users of the SFTP service and also modify that configuration." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 6.3, "baseSeverity": "MEDIUM", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:L/I:L/A:L/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 5.3, "baseSeverity": "MEDIUM", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:L/VI:L/VA:L/SC:N/SI:N/SA:N", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-200", "description": "CWE-200: Exposure of Sensitive Information to an Unauthorized Actor", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:45.831Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46894", "datePublished": "2024-11-12T12:49:45.831Z", "dateReserved": "2024-09-12T11:26:58.816Z", "dateUpdated": "2024-11-12T14:19:46.429Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2023-48429
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T21:30:35.075Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 2" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). The Web UI of affected devices does not check the length of parameters in certain conditions. This allows a malicious admin to crash the server by sending a crafted request to the server. The server will automatically restart." } ], "metrics": [ { "cvssV3_1": { "baseScore": 2.7, "baseSeverity": "LOW", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:N/I:N/A:L/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-394", "description": "CWE-394: Unexpected Status Code or Return Value", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-12-12T11:27:20.840Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2023-48429", "datePublished": "2023-12-12T11:27:20.840Z", "dateReserved": "2023-11-16T16:30:40.849Z", "dateUpdated": "2024-08-02T21:30:35.075Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2024-46889
Vulnerability from cvelistv5
6.9 (Medium) - CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:N/SA:N
{ "containers": { "adp": [ { "affected": [ { "cpes": [ "cpe:2.3:a:seimens:sinec_ins:*:*:*:*:*:*:*:*" ], "defaultStatus": "unknown", "product": "sinec_ins", "vendor": "seimens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "metrics": [ { "other": { "content": { "id": "CVE-2024-46889", "options": [ { "Exploitation": "none" }, { "Automatable": "yes" }, { "Technical Impact": "partial" } ], "role": "CISA Coordinator", "timestamp": "2024-11-12T14:29:00.705847Z", "version": "2.0.3" }, "type": "ssvc" } } ], "providerMetadata": { "dateUpdated": "2024-11-12T14:30:25.375Z", "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0", "shortName": "CISA-ADP" }, "title": "CISA ADP Vulnrichment" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "lessThan": "V1.0 SP2 Update 3", "status": "affected", "version": "0", "versionType": "custom" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 3). The affected application uses hard-coded cryptographic key material to obfuscate configuration files. This could allow an attacker to learn that cryptographic key material through reverse engineering of the application binary and decrypt arbitrary backup files." 
} ], "metrics": [ { "cvssV3_1": { "baseScore": 5.3, "baseSeverity": "MEDIUM", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N/E:P/RL:O/RC:C", "version": "3.1" } }, { "cvssV4_0": { "baseScore": 6.9, "baseSeverity": "MEDIUM", "vectorString": "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:N/SA:N", "version": "4.0" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-321", "description": "CWE-321: Use of Hard-coded Cryptographic Key", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2024-11-12T12:49:40.474Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/html/ssa-915275.html" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2024-46889", "datePublished": "2024-11-12T12:49:40.474Z", "dateReserved": "2024-09-12T11:24:19.243Z", "dateUpdated": "2024-11-12T14:30:25.375Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }
cve-2023-48427
Vulnerability from cvelistv5
{ "containers": { "adp": [ { "providerMetadata": { "dateUpdated": "2024-08-02T21:30:35.359Z", "orgId": "af854a3a-2127-422b-91ae-364da2661108", "shortName": "CVE" }, "references": [ { "tags": [ "x_transferred" ], "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ], "title": "CVE Program Container" } ], "cna": { "affected": [ { "defaultStatus": "unknown", "product": "SINEC INS", "vendor": "Siemens", "versions": [ { "status": "affected", "version": "All versions \u003c V1.0 SP2 Update 2" } ] } ], "descriptions": [ { "lang": "en", "value": "A vulnerability has been identified in SINEC INS (All versions \u003c V1.0 SP2 Update 2). Affected products do not properly validate the certificate of the configured UMC server. This could allow an attacker to intercept credentials that are sent to the UMC server as well as to manipulate responses, potentially allowing an attacker to escalate privileges." } ], "metrics": [ { "cvssV3_1": { "baseScore": 8.1, "baseSeverity": "HIGH", "vectorString": "CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:H/I:H/A:H/E:P/RL:O/RC:C", "version": "3.1" } } ], "problemTypes": [ { "descriptions": [ { "cweId": "CWE-295", "description": "CWE-295: Improper Certificate Validation", "lang": "en", "type": "CWE" } ] } ], "providerMetadata": { "dateUpdated": "2023-12-12T11:27:18.362Z", "orgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "shortName": "siemens" }, "references": [ { "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-077170.pdf" } ] } }, "cveMetadata": { "assignerOrgId": "cec7a2ec-15b4-4faf-bd53-b40f371f3a77", "assignerShortName": "siemens", "cveId": "CVE-2023-48427", "datePublished": "2023-12-12T11:27:18.362Z", "dateReserved": "2023-11-16T16:30:40.849Z", "dateUpdated": "2024-08-02T21:30:35.359Z", "state": "PUBLISHED" }, "dataType": "CVE_RECORD", "dataVersion": "5.1" }