var-202201-0349
Vulnerability from variot
node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor. node-fetch contains an open redirect vulnerability through which information may be obtained and tampered with. The purpose of this text-only errata is to inform you about the security issues fixed in this release.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256
====================================================================
Red Hat Security Advisory
Synopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update
Advisory ID: RHSA-2022:6156-01
Product: RHODF
Advisory URL: https://access.redhat.com/errata/RHSA-2022:6156
Issue date: 2022-08-24
CVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528 CVE-2022-0235
           CVE-2022-0536 CVE-2022-0670 CVE-2022-1292 CVE-2022-1586
           CVE-2022-1650 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927
           CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-22576
           CVE-2022-23772 CVE-2022-23773 CVE-2022-23806 CVE-2022-24675
           CVE-2022-24771 CVE-2022-24772 CVE-2022-24773 CVE-2022-24785
           CVE-2022-24921 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774
           CVE-2022-27776 CVE-2022-27782 CVE-2022-28327 CVE-2022-29526
           CVE-2022-29810 CVE-2022-29824 CVE-2022-31129
====================================================================
1. Summary:
Updated images that include numerous enhancements, security, and bug fixes are now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat Enterprise Linux 8.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Description:
Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform. In addition to persistent storage, Red Hat OpenShift Data Foundation provisions a multicloud data management service with an S3 compatible API.
Security Fix(es):
- eventsource: Exposure of Sensitive Information (CVE-2022-1650)
- moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)
- nodejs-set-value: type confusion allows bypass of CVE-2019-10747 (CVE-2021-23440)
- nanoid: Information disclosure via valueOf() function (CVE-2021-23566)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- follow-redirects: Exposure of Sensitive Information via Authorization Header leak (CVE-2022-0536)
- prometheus/client_golang: Denial of service using InstrumentHandlerCounter (CVE-2022-21698)
- golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString (CVE-2022-23772)
- golang: cmd/go: misinterpretation of branch names can lead to incorrect access control (CVE-2022-23773)
- golang: crypto/elliptic: IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)
- node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery (CVE-2022-24771)
- node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery (CVE-2022-24772)
- node-forge: Signature verification leniency in checking DigestInfo structure (CVE-2022-24773)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: regexp: stack exhaustion via a deeply nested expression (CVE-2022-24921)
- golang: crypto/elliptic: panic caused by oversized scalar (CVE-2022-28327)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
These updated images include numerous enhancements and bug fixes. Space precludes documenting all of these changes in this advisory. Users are directed to the Red Hat OpenShift Data Foundation Release Notes for information on the most significant of these changes:
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
All Red Hat OpenShift Data Foundation users are advised to upgrade to these updated images, which provide numerous bug fixes and enhancements.
- Solution:
Before applying this update, make sure all previously released errata relevant to your system have been applied. For details on how to apply this update, refer to: https://access.redhat.com/articles/11258
- Bugs fixed (https://bugzilla.redhat.com/):
1937117 - Deletion of StorageCluster doesn't remove ceph toolbox pod
1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified
1973317 - libceph: read_partial_message and bad crc/signature errors
1996829 - Permissions assigned to ceph auth principals when using external storage are too broad
2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747
2027724 - Warning log for rook-ceph-toolbox in ocs-operator log
2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter
2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm
2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function
2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements
2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString
2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control
2056697 - odf-csi-addons-operator subscription failed while using custom catalog source
2058211 - Add validation for CIDR field in DRPolicy
2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced
2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10
2061713 - [KMS] The error message during creation of encrypted PVC mentions the parameter in UPPER_CASE
2063691 - [GSS] [RFE] Add termination policy to s3 route
2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint
2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression
2066514 - OCS operator to install Ceph prometheus alerts instead of Rook
2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route
2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking digestAlgorithm structure can lead to signature forgery
2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery
2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking DigestInfo structure
2069314 - OCS external mode should allow specifying names for all Ceph auth principals
2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster.
2069812 - must-gather: rbd_vol_and_snap_info collection is broken
2069815 - must-gather: essential rbd mirror command outputs aren't collected
2070542 - After creating a new storage system it redirects to 404 error page instead of the "StorageSystems" page for OCP 4.11
2071494 - [DR] Applications are not getting deployed
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty
2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled
2075426 - 4.10 must gather is not available after GA of 4.10
2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in "Progressing" state although all the openshift-storage pods are up and Running
2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost
2077242 - vg-manager missing permissions
2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode
2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar
2079866 - [DR] odf-multicluster-console is in CLBO state
2079873 - csi-nfsplugin pods are not coming up after successful patch request to update "ROOK_CSI_ENABLE_NFS": "true"
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2081680 - Add the LVM Operator into the Storage category in OperatorHub
2082028 - UI does not have the option to configure capacity, security, networks, etc. during storagesystem creation
2082078 - OBC's not getting created on primary cluster when manageds3 set as "true" for mirrorPeer
2082497 - Do not filter out removable devices
2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)
2083441 - LVM operator should deploy the volumesnapshotclass resource
2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status
2083993 - Add missing pieces for storageclassclaim
2084041 - [Console Migration] Link-able storage system name directs to blank page
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] "an empty namespace may not be set when a resource name is provided"
2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates
2084546 - [Console Migration] Provider details absent under backing store in UI
2084565 - [Console Migration] The creation of new backing store , directs to a blank page
2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information
2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred
2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace
2086557 - Thin pool in lvm operator doesn't use all disks
2086675 - [UI]No option to "add capacity" via the Installed Operators tab
2086982 - ODF 4.11 deployment is failing
2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm
2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and 'Overview' tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown
2087107 - Set default storage class if none is set
2087237 - [UI] After clicking on Create StorageSystem, it navigates to Storage Systems tab but shows an error message
2087675 - ocs-metrics-exporter pod crashes on odf v4.11
2087732 - [Console Migration] Events page missing under new namespace store
2087755 - [Console Migration] Bucket Class details page doesn't have the complete details in UI
2088359 - Send VG Metrics even if storage is being consumed from thinPool alone
2088380 - KMS using vault on standalone MCG cluster is not enabled
2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint
2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook
2089296 - [MS v2] Storage cluster in error phase and 'ocs-provider-qe' addon installation failed with ODF 4.10.2
2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts
2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9.
2089552 - [MS v2] Cannot create StorageClassClaim
2089567 - [Console Migration] Improve the styling of Various Components
2089786 - [Console Migration] "Attach to deployment" option is missing in kebab menu for Object Bucket Claims.
2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket.
2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed
2090278 - [LVMO] Some containers are missing resource requirements and limits
2090314 - [LVMO] CSV is missing some useful annotations
2090953 - [MCO] DRCluster created under default namespace
2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics
2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool.
2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference
2091681 - Auto replication policy type detection is not happening on DRPolicy creation page when ceph cluster is external
2091894 - All backingstores in cluster spontaneously change their own secret
2091951 - [GSS] OCS pods are restarting due to liveness probe failure
2091998 - Volume Snapshots not work with external restricted mode
2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool
2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks
2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)
2092349 - Enable zeroing on the thin-pool during creation
2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase
2092400 - [MS v2] StorageClassClaim creation is failing with error "no StorageCluster found"
2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically
2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected
2094179 - MCO fails to create DRClusters when replication mode is synchronous
2094853 - [Console Migration] Description under storage class drop down in add capacity is missing.
2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2095155 - Use tool black to format the python external script
2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster
2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time
2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page
2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened
2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False
2096937 - Storage - Data Foundation: i18n misses
2097216 - Collect StorageClassClaim details in must-gather
2097287 - [UI] Dropdown doesn't close on its own after arbiter zone selection on 'Capacity and nodes' page
2097305 - Add translations for ODF 4.11
2098121 - Managed ODF not getting detected
2098261 - Remove BlockPools (no use case) and Object (redundant with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment
2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount
2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled
2099581 - StorageClassClaim with encryption gets into Failed state
2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project
2099646 - Block pool list page kebab action menu is showing empty options
2099660 - OCS dashboards not appearing unless user clicks on "Overview" Tab
2099724 - S3 secret namespace on the managed cluster doesn't match with the namespace in the s3profile
2099965 - rbd: provide option to disable setting metadata on RBD images
2100326 - [ODF to ODF] Volume snapshot creation failed
2100352 - Make lvmo pod labels more uniform
2100946 - Avoid temporary ceph health alert for new clusters where the insecure global id is allowed longer than necessary
2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install
2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection
2103818 - Restored snapshot don't have any content
2104833 - Need to update configmap for IBM storage odf operator GA
2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS
- References:
https://access.redhat.com/security/cve/CVE-2021-23440
https://access.redhat.com/security/cve/CVE-2021-23566
https://access.redhat.com/security/cve/CVE-2021-40528
https://access.redhat.com/security/cve/CVE-2022-0235
https://access.redhat.com/security/cve/CVE-2022-0536
https://access.redhat.com/security/cve/CVE-2022-0670
https://access.redhat.com/security/cve/CVE-2022-1292
https://access.redhat.com/security/cve/CVE-2022-1586
https://access.redhat.com/security/cve/CVE-2022-1650
https://access.redhat.com/security/cve/CVE-2022-1785
https://access.redhat.com/security/cve/CVE-2022-1897
https://access.redhat.com/security/cve/CVE-2022-1927
https://access.redhat.com/security/cve/CVE-2022-2068
https://access.redhat.com/security/cve/CVE-2022-2097
https://access.redhat.com/security/cve/CVE-2022-21698
https://access.redhat.com/security/cve/CVE-2022-22576
https://access.redhat.com/security/cve/CVE-2022-23772
https://access.redhat.com/security/cve/CVE-2022-23773
https://access.redhat.com/security/cve/CVE-2022-23806
https://access.redhat.com/security/cve/CVE-2022-24675
https://access.redhat.com/security/cve/CVE-2022-24771
https://access.redhat.com/security/cve/CVE-2022-24772
https://access.redhat.com/security/cve/CVE-2022-24773
https://access.redhat.com/security/cve/CVE-2022-24785
https://access.redhat.com/security/cve/CVE-2022-24921
https://access.redhat.com/security/cve/CVE-2022-25313
https://access.redhat.com/security/cve/CVE-2022-25314
https://access.redhat.com/security/cve/CVE-2022-27774
https://access.redhat.com/security/cve/CVE-2022-27776
https://access.redhat.com/security/cve/CVE-2022-27782
https://access.redhat.com/security/cve/CVE-2022-28327
https://access.redhat.com/security/cve/CVE-2022-29526
https://access.redhat.com/security/cve/CVE-2022-29810
https://access.redhat.com/security/cve/CVE-2022-29824
https://access.redhat.com/security/cve/CVE-2022-31129
https://access.redhat.com/security/updates/classification/#important
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYwZpHdzjgjWX9erEAQgy1Q//QaStGj34eQ0ap5J5gCcC1lTv7U908fNy
Xo7VvwAi67IslacAiQhWNyhg+jr1c46Op7kAAC04f8n25IsM+7xYYyieJ0YDAP7N
b3iySRKnPI6I9aJlN0KMm7J1jfjFmcuPMrUdDHiSGNsmK9zLmsQs3dGMaCqYX+fY
sJEDPnMMulbkrPLTwSG2IEcpqGH2BoEYwPhSblt2fH0Pv6H7BWYF/+QjxkGOkGDj
gz0BBnc1Foir2BpYKv6/+3FUbcXFdBXmrA5BIcZ9157Yw3RP/khf+lQ6I1KYX1Am
2LI6/6qL8HyVWyl+DEUz0DxoAQaF5x61C35uENyh/U96sYeKXtP9rvDC41TvThhf
mX4woWcUN1euDfgEF22aP9/gy+OsSyfP+SV0d9JKIaM9QzCCOwyKcIM2+CeL4LZl
CSAYI7M+cKsl1wYrioNBDdG8H54GcGV8kS1Hihb+Za59J7pf/4IPuHy3Cd6FBymE
hTFLE9YGYeVtCufwdTw+4CEjB2jr3WtzlYcSc26SET9aPCoTUmS07BaIAoRmzcKY
3KKSKi3LvW69768OLQt8UT60WfQ7zHa+OWuEp1tVoXe/XU3je42yuptCd34axn7E
2gtZJOocJxL2FtehhxNTx7VI3Bjy2V0VGlqqf1t6/z6r0IOhqxLbKeBvH9/XF/6V
ERCapzwcRuQ=
=gV+z
-----END PGP SIGNATURE-----
--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce
.
Description:
Node.js is a software development platform for building fast and scalable network applications in the JavaScript programming language.
The following packages have been upgraded to a later upstream version: nodejs (14.21.3).
Bugs fixed (https://bugzilla.redhat.com/):
2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names
2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection
2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields
2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2066009 - CVE-2021-44906 minimist: prototype pollution
2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields
2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function
2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address
2142822 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]
2150323 - CVE-2022-24999 express: "qs" prototype poisoning causes the hang of the node process
2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service
2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability
2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check
2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS
2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule
2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable
2175827 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]
- Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
==========================================================================
Ubuntu Security Notice USN-6158-1
June 13, 2023
node-fetch vulnerability
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
- Ubuntu 18.04 LTS (Available with Ubuntu Pro)
Summary:
Node Fetch could be made to expose sensitive information if it opened a specially crafted file.
Software Description:
- node-fetch: A light-weight module that brings the Fetch API to Node.js
Details:
It was discovered that Node Fetch incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to obtain sensitive information.
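At its core, CVE-2022-0235 is that vulnerable node-fetch versions kept forwarding credential-bearing headers (such as Cookie and Authorization) when following a redirect to a different host, so a malicious redirect target could harvest them. Below is a minimal sketch of that cross-host header-filtering rule; the function name, the plain-object header representation, and the exact sensitive-header list are illustrative assumptions, not node-fetch internals:

```javascript
// Hedged sketch of the redirect header-stripping rule adopted by fixed
// node-fetch versions. Names here are illustrative, not library API.
function headersForRedirect(headers, fromUrl, toUrl) {
  // Headers assumed sensitive enough to drop on a cross-host redirect.
  const SENSITIVE = ['authorization', 'www-authenticate', 'cookie', 'cookie2'];

  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  const sameHost = from.protocol === to.protocol && from.host === to.host;

  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    // Drop credential-bearing headers when the redirect leaves the host.
    if (!sameHost && SENSITIVE.includes(name.toLowerCase())) continue;
    out[name] = value;
  }
  return out;
}
```

With a rule like this, a request to a trusted host that gets redirected elsewhere no longer carries its Authorization or Cookie headers, while same-host redirects keep them; vulnerable versions behaved as if every redirect were same-host.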
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS: node-fetch 1.7.3-2ubuntu0.1
Ubuntu 18.04 LTS (Available with Ubuntu Pro): node-fetch 1.7.3-1ubuntu0.1~esm1
In general, a standard system update will make all the necessary changes.
Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available.
Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console
2040693 - "Replication repository" wizard has no validation for name length
2040695 - [MTC UI] "Add Cluster" wizard gets stuck when the cluster name length is more than 63 characters
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2048537 - Exposed route host to image registry: connecting successfully to invalid registry "xyz.com"
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings
2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace
2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field.
2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade
2061335 - [MTC UI] "Update cluster" button is not getting disabled
2062266 - MTC UI does not display logs properly [OADP-BL]
2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend
2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x
2076593 - Velero pod log missing from UI drop down
2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]
2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan
2079252 - [MTC] Rsync options logs not visible in log-reader pod
2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI]
2082225 - non-numeric user when launching stage pods [OADP-BL]
2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments
2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods
2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels
2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]
2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts
2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]
2096939 - Fix legacy operator.yml inconsistencies and errors
2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured.
Description:
Red Hat Advanced Cluster Management for Kubernetes 2.5.0 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/
Security fixes:
- nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)
- containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)
- minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)
- openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)
- imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- nconf: Prototype pollution in memory store (CVE-2022-21803)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- nats-server: misusing the "dynamically provisioned sandbox accounts" feature, an authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
- opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)
Bug fixes:
- RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target (BZ# 2014557)
- RHACM 2.5.0 images (BZ# 2024938)
- [UI] When you delete host agent from infraenv no confirmation message appears (Are you sure you want to delete x?) (BZ# 2028348)
- Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly (BZ# 2028647)
- create cluster pool -> choose infra type. As a result, infra providers disappear from UI. (BZ# 2033339)
- Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)
- Observability - OCP 311 node role are not displayed completely (BZ# 2038650)
- Documented uninstall procedure leaves many leftovers (BZ# 2041921)
- infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)
- ACM failed to install due to some missing CRDs in operator (BZ# 2047463)
- Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)
- ACM home page now includes /home/ in url (BZ# 2051299)
- proxy heading in Add Credential should be capitalized (BZ# 2051349)
- ACM 2.5 tries to create new MCE instance when installed on top of existing MCE 2.0 (BZ# 2051983)
- Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)
- No cluster information was displayed after a policyset was created (BZ# 2053366)
- Dynamic plugin update does not take effect in Firefox (BZ# 2053516)
- Replicated policy should not be available when creating a Policy Set (BZ# 2054431)
- Placement section in Policy Set wizard does not reset when users click "Back" to re-configure placement (BZ# 2054433)
- Solution:
For Red Hat Advanced Cluster Management for Kubernetes, see the following documentation, which will be updated shortly for this release, for important instructions on installing this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing
- Bugs fixed (https://bugzilla.redhat.com/):
2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target
2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability
2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion
2028224 - RHACM 2.5.0 images
2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)
2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2033339 - create cluster pool -> choose infra type. As a result, infra providers disappear from UI.
2073179 - Policy controller was unable to retrieve violation status for an OCP 3.11 managed cluster on ARM hub
2073330 - Observability - memory usage data are not collected even collect rule is fired on SNO
2073355 - Get blank page when click policy with unknown status in Governance -> Overview page
2073508 - Thread responsible to get insights data from ks clusters is broken
2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters
2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology
2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin
2073740 - Console/App LC- Apps are deployed even though deployment does not proceed because of "resource conflict" error
2074178 - Editing Helm Argo Applications does not Prune Old Resources
2074626 - Policy placement failure during ZTP SNO scale test
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal
2074937 - UI allows creating cluster even when there are no ClusterImageSets
2075416 - infraEnv failed to create image after restore
2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod
2075739 - The lookup function won't check the referred resource whether exist when using template policies
2076421 - Can't select existing placement for policy or policyset when editing policy or policyset
2076494 - No policyreport CR for spoke clusters generated in the disconnected env
2076502 - The policyset card doesn't show the cluster status(violation/without violation) again after deleted one policy
2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator
2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster
2077291 - Prometheus doesn't display acm_managed_cluster_info after upgrade from 2.4 to 2.5
2077304 - Create Cluster button is disabled only if other clusters exist
2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5
2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI
2077751 - Can't create a template policy from UI when the object's name is referring Golang text template syntax in this policy
2077783 - Still show violation for clusterserviceversions after enforced "Detect Image vulnerabilities " policy template and the operator is installed
2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set
2078164 - Failed to edit a policy without placement
2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement
2078373 - Disable the hyperlink of ks node in standalone MCE environment since the search component was not exists
2078617 - Azure public credential details get pre-populated with base domain name in UI
2078952 - View pod logs in search details returns error
2078973 - Crashed pod is marked with success in Topology
2079013 - Changing existing placement rules does not change YAML file
2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM
2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on
2079494 - Hitting Enter in yaml editor caused unexpected keys "key00x:" to be created
2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5
2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page
2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted
2079615 - Edit appset placement in UI with a new placement throws error upon submitting
2079658 - Cluster Count is Incorrect in Application UI
2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower
2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters
2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses
2080503 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes
2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page
2080712 - Select an existing placement configuration does not work
2080776 - Unrecognized characters are displayed on policy and policy set yaml editors
2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster
2081810 - Type '-' character in Name field caused previously typed character backspaced in in the name field of policy wizard
2081829 - Application deployed on local cluster's topology is crashing after upgrade
2081938 - The deleted policy still be shown on the policyset review page when edit this policy set
2082226 - Object Storage Topology includes residue of resources after Upgrade
2082409 - Policy set details panel remains even after the policy set has been deleted
2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets
2083038 - Warning still refers to the klusterlet-addon-appmgr pod rather than the application-manager pod
2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated
2083434 - The provider-credential-controller did not support the RHV credentials type
2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace
2083870 - When editing an existing application and refreshing the Select an existing placement configuration, multiple occurrences of the placementrule gets displayed
2084034 - The status message looks messy in the policy set card, suggest one kind status one a row
2084158 - Support provisioning bm cluster where no provisioning network provided
2084622 - Local Helm application shows cluster resources as Not Deployed in Topology [Upgrade]
2085083 - Policies fail to copy to cluster namespace after ACM upgrade
2085237 - Resources referenced by a channel are not annotated with backup label
2085273 - Error querying for ansible job in app topology
2085281 - Template name error is reported but the template name was found in a different replicated policy
2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page
2087515 - Validation thrown out in configuration for disconnect install while creating bm credential
2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]
2088511 - Some cluster resources are not showing labels that are defined in the YAML
- It increases application response times and allows for dramatically improving performance while providing availability, reliability, and elastic scale. Find out more about Data Grid 8.4.0 in the Release Notes[3].
Security Fix(es):
- prismjs: improperly escaped output allows a XSS (CVE-2022-23647)
- snakeyaml: Denial of Service due to missing nested depth limitation for collections (CVE-2022-25857)
- node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)
- netty: world readable temporary file containing sensitive data (CVE-2022-24823)
- snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode (CVE-2022-38749)
- snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject (CVE-2022-38750)
- snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match (CVE-2022-38751)
- snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode (CVE-2022-38752)
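Of the flaws above, CVE-2022-0235 is the one this database entry is filed under: vulnerable node-fetch releases kept forwarding credential-bearing headers such as Authorization and Cookie when following a redirect to a different host, so an attacker-controlled redirect target could capture them. The sketch below is an illustration of that class of fix, not node-fetch's actual source; the helper name and header set are assumptions for the example.

```javascript
// Illustrative sketch of the CVE-2022-0235 fix class (hypothetical helper,
// not node-fetch's real implementation): when a redirect target is not the
// original host or one of its subdomains, drop credential-bearing headers.
function headersForRedirect(headers, fromUrl, toUrl) {
  const from = new URL(fromUrl);
  const to = new URL(toUrl);
  const sameOrSubdomain =
    to.hostname === from.hostname ||
    to.hostname.endsWith('.' + from.hostname);
  const out = { ...headers };
  if (!sameOrSubdomain) {
    // Credentials must not leak to an unauthorized third-party host.
    delete out.authorization;
    delete out.cookie;
  }
  return out;
}

// Example: a redirect from api.example.com to attacker.test loses the token.
const redirected = headersForRedirect(
  { authorization: 'Bearer secret-token', accept: 'application/json' },
  'https://api.example.com/login',
  'https://attacker.test/capture'
);
console.log('authorization' in redirected); // false: header was stripped
```

Patched node-fetch (2.6.7 and 3.1.1, per the affected version ranges later in this entry) applies a comparable same-host-or-subdomain check before forwarding sensitive headers on redirects.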
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
- Solution:
To install this update, do the following:
- Download the Data Grid 8.4.0 Server patch from the customer portal[2].
- Back up your existing Data Grid installation. You should back up databases, configuration files, and so on.
- Install the Data Grid 8.4.0 Server patch.
- Restart Data Grid to ensure the changes take effect.
For more information about Data Grid 8.4.0, refer to the 8.4.0 Release Notes[3].
- Bugs fixed (https://bugzilla.redhat.com/):
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2056643 - CVE-2022-23647 prismjs: improperly escaped output allows a XSS
2087186 - CVE-2022-24823 netty: world readable temporary file containing sensitive data
2126789 - CVE-2022-25857 snakeyaml: Denial of Service due to missing nested depth limitation for collections
2129706 - CVE-2022-38749 snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode
2129707 - CVE-2022-38750 snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject
2129709 - CVE-2022-38751 snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match
2129710 - CVE-2022-38752 snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode
{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0349", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "node-fetch", "scope": "lt", "trust": 1.0, "vendor": "node fetch", "version": "3.1.1" }, { "model": "node-fetch", "scope": "gte", "trust": 1.0, "vendor": "node fetch", "version": "3.0.0" }, { "model": "node-fetch", "scope": "lt", "trust": 1.0, "vendor": "node
fetch", "version": "2.6.7" }, { "model": "sinec ins", "scope": "eq", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "sinec ins", "scope": "lt", "trust": 1.0, "vendor": "siemens", "version": "1.0" }, { "model": "sinec ins", "scope": null, "trust": 0.8, "vendor": "\u30b7\u30fc\u30e1\u30f3\u30b9", "version": null }, { "model": "node-fetch", "scope": null, "trust": 0.8, "vendor": "node fetch \u30d7\u30ed\u30b8\u30a7\u30af\u30c8", "version": null }, { "model": "gnu/linux", "scope": null, "trust": 0.8, "vendor": "debian", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:node-fetch_project:node-fetch:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "2.6.7", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:node-fetch_project:node-fetch:*:*:*:*:*:node.js:*:*", "cpe_name": [], "versionEndExcluding": "3.1.1", "versionStartIncluding": "3.0.0", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:sp1:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "1.0", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:siemens:sinec_ins:1.0:-:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" } ] } ], "sources": 
[ { "db": "NVD", "id": "CVE-2022-0235" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" } ], "trust": 0.8 }, "cve": "CVE-2022-0235", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "MEDIUM", "accessVector": "NETWORK", "authentication": "NONE", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 5.8, "confidentialityImpact": "PARTIAL", "exploitabilityScore": 8.6, "impactScore": 4.9, "integrityImpact": "PARTIAL", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "MEDIUM", "trust": 1.0, "userInteractionRequired": true, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Medium", "accessVector": "Network", "authentication": "None", "author": "NVD", "availabilityImpact": "None", "baseScore": 5.8, 
"confidentialityImpact": "Partial", "exploitabilityScore": null, "id": "CVE-2022-0235", "impactScore": null, "integrityImpact": "Partial", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "Medium", "trust": 0.9, "userInteractionRequired": null, "vectorString": "AV:N/AC:M/Au:N/C:P/I:P/A:N", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "NVD", "availabilityImpact": "NONE", "baseScore": 6.1, "baseSeverity": "MEDIUM", "confidentialityImpact": "LOW", "exploitabilityScore": 2.8, "impactScore": 2.7, "integrityImpact": "LOW", "privilegesRequired": "NONE", "scope": "CHANGED", "trust": 1.0, "userInteraction": "REQUIRED", "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N", "version": "3.1" }, { "attackComplexity": "LOW", "attackVector": "NETWORK", "author": "security@huntr.dev", "availabilityImpact": "HIGH", "baseScore": 8.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 2.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" }, { "attackComplexity": "Low", "attackVector": "Network", "author": "NVD", "availabilityImpact": "None", "baseScore": 6.1, "baseSeverity": "Medium", "confidentialityImpact": "Low", "exploitabilityScore": null, "id": "CVE-2022-0235", "impactScore": null, "integrityImpact": "Low", "privilegesRequired": "None", "scope": "Changed", "trust": 0.8, "userInteraction": "Required", "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2022-0235", "trust": 1.8, "value": "MEDIUM" }, { "author": "security@huntr.dev", "id": "CVE-2022-0235", "trust": 1.0, "value": "HIGH" }, { "author": "VULMON", "id": "CVE-2022-0235", "trust": 0.1, "value": "MEDIUM" } ] } ], "sources": [ { "db": 
"VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "node-fetch is vulnerable to Exposure of Sensitive Information to an Unauthorized Actor. node-fetch Exists in an open redirect vulnerability.Information may be obtained and information may be tampered with. The purpose of this text-only\nerrata is to inform you about the security issues fixed in this release. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n==================================================================== \nRed Hat Security Advisory\n\nSynopsis: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update\nAdvisory ID: RHSA-2022:6156-01\nProduct: RHODF\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:6156\nIssue date: 2022-08-24\nCVE Names: CVE-2021-23440 CVE-2021-23566 CVE-2021-40528\n CVE-2022-0235 CVE-2022-0536 CVE-2022-0670\n CVE-2022-1292 CVE-2022-1586 CVE-2022-1650\n CVE-2022-1785 CVE-2022-1897 CVE-2022-1927\n CVE-2022-2068 CVE-2022-2097 CVE-2022-21698\n CVE-2022-22576 CVE-2022-23772 CVE-2022-23773\n CVE-2022-23806 CVE-2022-24675 CVE-2022-24771\n CVE-2022-24772 CVE-2022-24773 CVE-2022-24785\n CVE-2022-24921 CVE-2022-25313 CVE-2022-25314\n CVE-2022-27774 CVE-2022-27776 CVE-2022-27782\n CVE-2022-28327 CVE-2022-29526 CVE-2022-29810\n CVE-2022-29824 CVE-2022-31129\n====================================================================\n1. Summary:\n\nUpdated images that include numerous enhancements, security, and bug fixes\nare now available for Red Hat OpenShift Data Foundation 4.11.0 on Red Hat\nEnterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. 
A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Description:\n\nRed Hat OpenShift Data Foundation is software-defined storage integrated\nwith and optimized for the Red Hat OpenShift Container Platform. Red Hat\nOpenShift Data Foundation is a highly scalable, production-grade persistent\nstorage for stateful applications running in the Red Hat OpenShift\nContainer Platform. In addition to persistent storage, Red Hat OpenShift\nData Foundation provisions a multicloud data management service with an S3\ncompatible API. \n\nSecurity Fix(es):\n\n* eventsource: Exposure of Sensitive Information (CVE-2022-1650)\n\n* moment: inefficient parsing algorithm resulting in DoS (CVE-2022-31129)\n\n* nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n(CVE-2021-23440)\n\n* nanoid: Information disclosure via valueOf() function (CVE-2021-23566)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* follow-redirects: Exposure of Sensitive Information via Authorization\nHeader leak (CVE-2022-0536)\n\n* prometheus/client_golang: Denial of service using\nInstrumentHandlerCounter (CVE-2022-21698)\n\n* golang: math/big: uncontrolled memory consumption due to an unhandled\noverflow via Rat.SetString (CVE-2022-23772)\n\n* golang: cmd/go: misinterpretation of branch names can lead to incorrect\naccess control (CVE-2022-23773)\n\n* golang: crypto/elliptic: IsOnCurve returns true for invalid field\nelements (CVE-2022-23806)\n\n* golang: encoding/pem: fix stack overflow in Decode (CVE-2022-24675)\n\n* node-forge: Signature verification leniency in checking `digestAlgorithm`\nstructure can lead to signature forgery (CVE-2022-24771)\n\n* node-forge: Signature verification failing to check tailing garbage bytes\ncan lead to signature forgery (CVE-2022-24772)\n\n* node-forge: Signature verification 
leniency in checking `DigestInfo`\nstructure (CVE-2022-24773)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: regexp: stack exhaustion via a deeply nested expression\n(CVE-2022-24921)\n\n* golang: crypto/elliptic: panic caused by oversized scalar\n(CVE-2022-28327)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nBug Fix(es):\n\nThese updated images include numerous enhancements and bug fixes. Space\nprecludes documenting all of these changes in this advisory. Users are\ndirected to the Red Hat OpenShift Data Foundation Release Notes for\ninformation on the most significant of these changes:\n\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\nAll Red Hat OpenShift Data Foundation users are advised to upgrade to these\nupdated images, which provide numerous bug fixes and enhancements. \n\n3. Solution:\n\nBefore applying this update, make sure all previously released errata\nrelevant to your system have been applied. For details on how to apply this\nupdate, refer to: https://access.redhat.com/articles/11258\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1937117 - Deletion of StorageCluster doesn\u0027t remove ceph toolbox pod\n1947482 - The device replacement process when deleting the volume metadata need to be fixed or modified\n1973317 - libceph: read_partial_message and bad crc/signature errors\n1996829 - Permissions assigned to ceph auth principals when using external storage are too broad\n2004944 - CVE-2021-23440 nodejs-set-value: type confusion allows bypass of CVE-2019-10747\n2027724 - Warning log for rook-ceph-toolbox in ocs-operator log\n2029298 - [GSS] Noobaa is not compatible with aws bucket lifecycle rule creation policies\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2045880 - CVE-2022-21698 prometheus/client_golang: Denial of service using InstrumentHandlerCounter\n2047173 - [RFE] Change controller-manager pod name in odf-lvm-operator to more relevant name to lvm\n2050853 - CVE-2021-23566 nanoid: Information disclosure via valueOf() function\n2050897 - CVE-2022-0235 mcg-core-container: node-fetch: exposure of sensitive information to an unauthorized actor [openshift-data-foundation-4]\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2053429 - CVE-2022-23806 golang: crypto/elliptic: IsOnCurve returns true for invalid field elements\n2053532 - CVE-2022-23772 golang: math/big: uncontrolled memory consumption due to an unhandled overflow via Rat.SetString\n2053541 - CVE-2022-23773 golang: cmd/go: misinterpretation of branch names can lead to incorrect access control\n2056697 - odf-csi-addons-operator subscription failed while using custom catalog source\n2058211 - Add validation for CIDR field in DRPolicy\n2060487 - [ODF to ODF MS] Consumer lost connection to provider API if the endpoint node is powered off/replaced\n2060790 - ODF under Storage missing for OCP 4.11 + ODF 4.10\n2061713 - [KMS] The error message during creation of encrypted PVC 
mentions the parameter in UPPER_CASE\n2063691 - [GSS] [RFE] Add termination policy to s3 route\n2064426 - [GSS][External Mode] exporter python script does not support FQDN for RGW endpoint\n2064857 - CVE-2022-24921 golang: regexp: stack exhaustion via a deeply nested expression\n2066514 - OCS operator to install Ceph prometheus alerts instead of Rook\n2067079 - [GSS] [RFE] Add termination policy to ocs-storagecluster-cephobjectstore route\n2067387 - CVE-2022-24771 node-forge: Signature verification leniency in checking `digestAlgorithm` structure can lead to signature forgery\n2067458 - CVE-2022-24772 node-forge: Signature verification failing to check tailing garbage bytes can lead to signature forgery\n2067461 - CVE-2022-24773 node-forge: Signature verification leniency in checking `DigestInfo` structure\n2069314 - OCS external mode should allow specifying names for all Ceph auth principals\n2069319 - [RFE] OCS CephFS External Mode Multi-tenancy. Add cephfs subvolumegroup and path= caps per cluster. 
\n2069812 - must-gather: rbd_vol_and_snap_info collection is broken\n2069815 - must-gather: essential rbd mirror command outputs aren\u0027t collected\n2070542 - After creating a new storage system it redirects to 404 error page instead of the \"StorageSystems\" page for OCP 4.11\n2071494 - [DR] Applications are not getting deployed\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073920 - rook osd prepare failed with this error - failed to set kek as an environment variable: key encryption key is empty\n2074810 - [Tracker for Bug 2074585] MCG standalone deployment page goes blank when the KMS option is enabled\n2075426 - 4.10 must gather is not available after GA of 4.10\n2075581 - [IBM Z] : ODF 4.11.0-38 deployment leaves the storagecluster in \"Progressing\" state although all the openshift-storage pods are up and Running\n2076457 - After node replacement[provider], connection issue between consumer and provider if the provider node which was referenced MON-endpoint configmap (on consumer) is lost\n2077242 - vg-manager missing permissions\n2077688 - CVE-2022-24675 golang: encoding/pem: fix stack overflow in Decode\n2077689 - CVE-2022-28327 golang: crypto/elliptic: panic caused by oversized scalar\n2079866 - [DR] odf-multicluster-console is in CLBO state\n2079873 - csi-nfsplugin pods are not coming up after successful patch request to update \"ROOK_CSI_ENABLE_NFS\": \"true\"\u0027\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2081680 - Add the LVM Operator into the Storage category in OperatorHub\n2082028 - UI does not have the option to configure capacity, security and networks,etc. 
during storagesystem creation\n2082078 - OBC\u0027s not getting created on primary cluster when manageds3 set as \"true\" for mirrorPeer\n2082497 - Do not filter out removable devices\n2083074 - [Tracker for Ceph BZ #2086419] Two Ceph mons crashed in ceph-16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)\n2083441 - LVM operator should deploy the volumesnapshotclass resource\n2083953 - [Tracker for Ceph BZ #2084579] PVC created with ocs-storagecluster-ceph-nfs storageclass is moving to pending status\n2083993 - Add missing pieces for storageclassclaim\n2084041 - [Console Migration] Link-able storage system name directs to blank page\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2084201 - MCG operator pod is stuck in a CrashLoopBackOff; Panic Attack: [] an empty namespace may not be set when a resource name is provided\"\n2084503 - CLI falsely flags unique PVPool backingstore secrets as duplicates\n2084546 - [Console Migration] Provider details absent under backing store in UI\n2084565 - [Console Migration] The creation of new backing store , directs to a blank page\n2085307 - CVE-2022-1650 eventsource: Exposure of Sensitive Information\n2085351 - [DR] Mirrorpeer failed to create with msg Internal error occurred\n2085357 - [DR] When drpolicy is create drcluster resources are getting created under default namespace\n2086557 - Thin pool in lvm operator doesn\u0027t use all disks\n2086675 - [UI]No option to \"add capacity\" via the Installed Operators tab\n2086982 - ODF 4.11 deployment is failing\n2086983 - [odf-clone] Mons IP not updated correctly in the rook-ceph-mon-endpoints cm\n2087078 - [RDR] [UI] Multiple instances of Object Bucket, Object Bucket Claims and \u0027Overview\u0027 tab is present under Storage section on the Hub cluster when navigated back from the Managed cluster using the Hybrid console dropdown\n2087107 - Set default storage class if none is set\n2087237 - [UI] After clicking on Create StorageSystem, 
it navigates to Storage Systems tab but shows an error message\n2087675 - ocs-metrics-exporter pod crashes on odf v4.11\n2087732 - [Console Migration] Events page missing under new namespace store\n2087755 - [Console Migration] Bucket Class details page doesn\u0027t have the complete details in UI\n2088359 - Send VG Metrics even if storage is being consumed from thinPool alone\n2088380 - KMS using vault on standalone MCG cluster is not enabled\n2088506 - ceph-external-cluster-details-exporter.py should not accept hostname for rgw-endpoint\n2088587 - Removal of external storage system with misconfigured cephobjectstore fails on noobaa webhook\n2089296 - [MS v2] Storage cluster in error phase and \u0027ocs-provider-qe\u0027 addon installation failed with ODF 4.10.2\n2089342 - prometheus pod goes into OOMKilled state during ocs-osd-controller-manager pod restarts\n2089397 - [GSS]OSD pods CLBO after upgrade to 4.10 from 4.9. \n2089552 - [MS v2] Cannot create StorageClassClaim\n2089567 - [Console Migration] Improve the styling of Various Components\n2089786 - [Console Migration] \"Attach to deployment\" option is missing in kebab menu for Object Bucket Claims . \n2089795 - [Console Migration] Yaml and Events page is missing for Object Bucket Claims and Object Bucket. \n2089797 - [RDR] rbd image failed to mount with msg rbd error output: rbd: sysfs write failed\n2090278 - [LVMO] Some containers are missing resource requirements and limits\n2090314 - [LVMO] CSV is missing some useful annotations\n2090953 - [MCO] DRCluster created under default namespace\n2091487 - [Hybrid Console] Multicluster dashboard is not displaying any metrics\n2091638 - [Console Migration] Yaml page is missing for existing and newly created Block pool. 
\n2091641 - MCG operator pod is stuck in a CrashLoopBackOff; MapSecretToNamespaceStores invalid memory address or nil pointer dereference\n2091681 - Auto replication policy type detection is not happneing on DRPolicy creation page when ceph cluster is external\n2091894 - All backingstores in cluster spontaneously change their own secret\n2091951 - [GSS] OCS pods are restarting due to liveness probe failure\n2091998 - Volume Snapshots not work with external restricted mode\n2092143 - Deleting a CephBlockPool CR does not delete the underlying Ceph pool\n2092217 - [External] UI for uploding JSON data for external cluster connection has some strict checks\n2092220 - [Tracker for Ceph BZ #2096882] CephNFS is not reaching to Ready state on ODF on IBM Power (ppc64le)\n2092349 - Enable zeroing on the thin-pool during creation\n2092372 - [MS v2] StorageClassClaim is not reaching Ready Phase\n2092400 - [MS v2] StorageClassClaim creation is failing with error \"no StorageCluster found\"\n2093266 - [RDR] When mirroring is enabled rbd mirror daemon restart config should be enabled automatically\n2093848 - Note about token for encrypted PVCs should be removed when only cluster wide encryption checkbox is selected\n2094179 - MCO fails to create DRClusters when replication mode is synchronous\n2094853 - [Console Migration] Description under storage class drop down in add capacity is missing . 
\n2094856 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2095155 - Use tool `black` to format the python external script\n2096209 - ReclaimSpaceJob fails on OCP 4.11 + ODF 4.10 cluster\n2096414 - Compression status for cephblockpool is reported as Enabled and Disabled at the same time\n2096509 - [Console Migration] Unable to select Storage Class in Object Bucket Claim creation page\n2096513 - Infinite BlockPool tabs get created when the StorageSystem details page is opened\n2096823 - After upgrading the cluster from ODF4.10 to ODF4.11, the ROOK_CSI_ENABLE_CEPHFS move to False\n2096937 - Storage - Data Foundation: i18n misses\n2097216 - Collect StorageClassClaim details in must-gather\n2097287 - [UI] Dropdown doesn\u0027t close on it\u0027s own after arbiter zone selection on \u0027Capacity and nodes\u0027 page\n2097305 - Add translations for ODF 4.11\n2098121 - Managed ODF not getting detected\n2098261 - Remove BlockPools(no use case) and Object(redundat with Overview) tab on the storagesystem page for NooBaa only and remove BlockPools tab for External mode deployment\n2098536 - [KMS] PVC creation using vaulttenantsa method is failing due to token secret missing in serviceaccount\n2099265 - [KMS] The storagesystem creation page goes blank when KMS is enabled\n2099581 - StorageClassClaim with encryption gets into Failed state\n2099609 - The red-hat-storage/topolvm release-4.11 needs to be synced with the upstream project\n2099646 - Block pool list page kebab action menu is showing empty options\n2099660 - OCS dashbaords not appearing unless user clicks on \"Overview\" Tab\n2099724 - S3 secret namespace on the managed cluster doesn\u0027t match with the namespace in the s3profile\n2099965 - rbd: provide option to disable setting metadata on RBD images\n2100326 - [ODF to ODF] Volume snapshot creation failed\n2100352 - Make lvmo pod labels more uniform\n2100946 - Avoid temporary ceph health alert for new 
clusters where the insecure global id is allowed longer than necessary\n2101139 - [Tracker for OCP BZ #2102782] topolvm-controller get into CrashLoopBackOff few minutes after install\n2101380 - Default backingstore is rejected with message INVALID_SCHEMA_PARAMS SERVER account_api#/methods/check_external_connection\n2103818 - Restored snapshot don\u0027t have any content\n2104833 - Need to update configmap for IBM storage odf operator GA\n2105075 - CVE-2022-31129 moment: inefficient parsing algorithm resulting in DoS\n\n5. References:\n\nhttps://access.redhat.com/security/cve/CVE-2021-23440\nhttps://access.redhat.com/security/cve/CVE-2021-23566\nhttps://access.redhat.com/security/cve/CVE-2021-40528\nhttps://access.redhat.com/security/cve/CVE-2022-0235\nhttps://access.redhat.com/security/cve/CVE-2022-0536\nhttps://access.redhat.com/security/cve/CVE-2022-0670\nhttps://access.redhat.com/security/cve/CVE-2022-1292\nhttps://access.redhat.com/security/cve/CVE-2022-1586\nhttps://access.redhat.com/security/cve/CVE-2022-1650\nhttps://access.redhat.com/security/cve/CVE-2022-1785\nhttps://access.redhat.com/security/cve/CVE-2022-1897\nhttps://access.redhat.com/security/cve/CVE-2022-1927\nhttps://access.redhat.com/security/cve/CVE-2022-2068\nhttps://access.redhat.com/security/cve/CVE-2022-2097\nhttps://access.redhat.com/security/cve/CVE-2022-21698\nhttps://access.redhat.com/security/cve/CVE-2022-22576\nhttps://access.redhat.com/security/cve/CVE-2022-23772\nhttps://access.redhat.com/security/cve/CVE-2022-23773\nhttps://access.redhat.com/security/cve/CVE-2022-23806\nhttps://access.redhat.com/security/cve/CVE-2022-24675\nhttps://access.redhat.com/security/cve/CVE-2022-24771\nhttps://access.redhat.com/security/cve/CVE-2022-24772\nhttps://access.redhat.com/security/cve/CVE-2022-24773\nhttps://access.redhat.com/security/cve/CVE-2022-24785\nhttps://access.redhat.com/security/cve/CVE-2022-24921\nhttps://access.redhat.com/security/cve/CVE-2022-25313\nhttps://access.redhat.com/security/cve
/CVE-2022-25314\nhttps://access.redhat.com/security/cve/CVE-2022-27774\nhttps://access.redhat.com/security/cve/CVE-2022-27776\nhttps://access.redhat.com/security/cve/CVE-2022-27782\nhttps://access.redhat.com/security/cve/CVE-2022-28327\nhttps://access.redhat.com/security/cve/CVE-2022-29526\nhttps://access.redhat.com/security/cve/CVE-2022-29810\nhttps://access.redhat.com/security/cve/CVE-2022-29824\nhttps://access.redhat.com/security/cve/CVE-2022-31129\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index\n\n6. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Description:\n\nNode.js is a software development platform for building fast and scalable\nnetwork applications in the JavaScript programming language. 
\n\nThe following packages have been upgraded to a later upstream version:\nnodejs (14.21.3). Bugs fixed (https://bugzilla.redhat.com/):\n\n2040839 - CVE-2021-44531 nodejs: Improper handling of URI Subject Alternative Names\n2040846 - CVE-2021-44532 nodejs: Certificate Verification Bypass via String Injection\n2040856 - CVE-2021-44533 nodejs: Incorrect handling of certificate subject and issuer fields\n2040862 - CVE-2022-21824 nodejs: Prototype pollution via console.table properties\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2066009 - CVE-2021-44906 minimist: prototype pollution\n2130518 - CVE-2022-35256 nodejs: HTTP Request Smuggling due to incorrect parsing of header fields\n2134609 - CVE-2022-3517 nodejs-minimatch: ReDoS via the braceExpand function\n2140911 - CVE-2022-43548 nodejs: DNS rebinding in inspect via invalid octal IP address\n2142822 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]\n2150323 - CVE-2022-24999 express: \"qs\" prototype poisoning causes the hang of the node process\n2156324 - CVE-2021-35065 glob-parent: Regular Expression Denial of Service\n2165824 - CVE-2022-25881 http-cache-semantics: Regular Expression Denial of Service (ReDoS) vulnerability\n2168631 - CVE-2022-4904 c-ares: buffer overflow in config_sortlist() due to missing string length check\n2170644 - CVE-2022-38900 decode-uri-component: improper input validation resulting in DoS\n2171935 - CVE-2023-23918 Node.js: Permissions policies can be bypassed via process.mainModule\n2172217 - CVE-2023-23920 Node.js: insecure loading of ICU data through ICU_DATA environment variable\n2175827 - nodejs:14/nodejs: Rebase to the latest Nodejs 14 release [rhel-8] [rhel-8.6.0.z]\n\n6. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
==========================================================================\nUbuntu Security Notice USN-6158-1\nJune 13, 2023\n\nnode-fetch vulnerability\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n- Ubuntu 18.04 LTS (Available with Ubuntu Pro)\n\nSummary:\n\nNode Fetch could be made to expose sensitive information if it opened a\nspecially crafted file. \n\nSoftware Description:\n- node-fetch: A light-weight module that brings the Fetch API to Node.js\n\nDetails:\n\nIt was discovered that Node Fetch incorrectly handled certain inputs. If a\nuser or an automated system were tricked into opening a specially crafted\ninput file, a remote attacker could possibly use this issue to obtain\nsensitive information. \n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n node-fetch 1.7.3-2ubuntu0.1\n\nUbuntu 18.04 LTS (Available with Ubuntu Pro):\n node-fetch 1.7.3-1ubuntu0.1~esm1\n\nIn general, a standard system update will make all the necessary changes. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? 
wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing 
velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes: \n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic 
IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nBug fixes:\n\n* RFE Copy secret with specific secret namespace, name for source and name,\nnamespace and cluster label for target (BZ# 2014557)\n\n* RHACM 2.5.0 images (BZ# 2024938)\n\n* [UI] When you delete host agent from infraenv no confirmation message\nappear (Are you sure you want to delete x?) (BZ#2028348)\n\n* Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller\nnot working properly (BZ# 2028647)\n\n* create cluster pool -\u003e choose infra type, As a result infra providers\ndisappear from UI. 
(BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. Solution:\n\nFor Red Hat Advanced Cluster Management for Kubernetes, see the following\ndocumentation, which will be updated shortly for this release, for\nimportant\ninstructions on installing this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. \n2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub\n2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO\n2073355 - Get blank page when click policy with unknown status in Governance -\u003e Overview page\n2073508 - Thread responsible to get insights data from *ks clusters is broken\n2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters\n2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology\n2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin\n2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of \"resource conflict\" error\n2074178 - Editing Helm Argo Applications does not Prune Old Resources\n2074626 - Policy placement failure during ZTP SNO scale test\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal\n2074937 - UI allows creating cluster even when there are no 
ClusterImageSets\n2075416 - infraEnv failed to create image after restore\n2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod\n2075739 - The lookup function won\u0027t check the referred resource whether exist when using template policies\n2076421 - Can\u0027t select existing placement for policy or policyset when editing policy or policyset\n2076494 - No policyreport CR for spoke clusters generated in the disconnected env\n2076502 - The policyset card doesn\u0027t show the cluster status(violation/without violation) again after deleted one policy\n2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator\n2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster\n2077291 - Prometheus doesn\u0027t display acm_managed_cluster_info after upgrade from 2.4 to 2.5\n2077304 - Create Cluster button is disabled only if other clusters exist\n2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5\n2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI\n2077751 - Can\u0027t create a template policy from UI when the object\u0027s name is referring Golang text template syntax in this policy\n2077783 - Still show violation for clusterserviceversions after enforced \"Detect Image vulnerabilities \" policy template and the operator is installed\n2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set\n2078164 - Failed to edit a policy without placement\n2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement\n2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists\n2078617 - Azure public credential details get pre-populated with base domain name in UI\n2078952 - View pod logs 
in search details returns error\n2078973 - Crashed pod is marked with success in Topology\n2079013 - Changing existing placement rules does not change YAML file\n2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on\n2079494 - Hitting Enter in yaml editor caused unexpected keys \"key00x:\" to be created\n2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5\n2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page\n2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted\n2079615 - Edit appset placement in UI with a new placement throws error upon submitting\n2079658 - Cluster Count is Incorrect in Application UI\n2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters\n2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080503 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page\n2080712 - Select an existing placement configuration does not work\n2080776 - Unrecognized characters are displayed on policy and policy set yaml editors\n2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster\n2081810 - Type \u0027-\u0027 character in Name field caused previously typed character 
backspaced in in the name field of policy wizard\n2081829 - Application deployed on local cluster\u0027s topology is crashing after upgrade\n2081938 - The deleted policy still be shown on the policyset review page when edit this policy set\n2082226 - Object Storage Topology includes residue of resources after Upgrade\n2082409 - Policy set details panel remains even after the policy set has been deleted\n2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets\n2083038 - Warning still refers to the `klusterlet-addon-appmgr` pod rather than the `application-manager` pod\n2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated\n2083434 - The provider-credential-controller did not support the RHV credentials type\n2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace\n2083870 - When editing an existing application and refreshing the `Select an existing placement configuration`, multiple occurrences of the placementrule gets displayed\n2084034 - The status message looks messy in the policy set card, suggest one kind status one a row\n2084158 - Support provisioning bm cluster where no provisioning network provided\n2084622 - Local Helm application shows cluster resources as `Not Deployed` in Topology [Upgrade]\n2085083 - Policies fail to copy to cluster namespace after ACM upgrade\n2085237 - Resources referenced by a channel are not annotated with backup label\n2085273 - Error querying for ansible job in app topology\n2085281 - Template name error is reported but the template name was found in a different replicated policy\n2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page\n2087515 - Validation thrown out in configuration for disconnect install while creating bm credential\n2088158 - Object 
Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]\n2088511 - Some cluster resources are not showing labels that are defined in the YAML\n\n5. \nIt increases application response times and allows for dramatically\nimproving performance while providing availability, reliability, and\nelastic scale. Find out more about Data Grid 8.4.0 in the Release Notes[3]. \n\nSecurity Fix(es):\n\n* prismjs: improperly escaped output allows a XSS (CVE-2022-23647)\n\n* snakeyaml: Denial of Service due to missing nested depth limitation for\ncollections (CVE-2022-25857)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* netty: world readable temporary file containing sensitive data\n(CVE-2022-24823)\n\n* snakeyaml: Uncaught exception in\norg.yaml.snakeyaml.composer.Composer.composeSequenceNode (CVE-2022-38749)\n\n* snakeyaml: Uncaught exception in\norg.yaml.snakeyaml.constructor.BaseConstructor.constructObject\n(CVE-2022-38750)\n\n* snakeyaml: Uncaught exception in\njava.base/java.util.regex.Pattern$Ques.match (CVE-2022-38751)\n\n* snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode\n(CVE-2022-38752)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. Solution:\n\nTo install this update, do the following:\n \n1. Download the Data Grid 8.4.0 Server patch from the customer portal[\u00b2]. Back up your existing Data Grid installation. You should back up\ndatabases, configuration files, and so on. Install the Data Grid 8.4.0 Server patch. Restart Data Grid to ensure the changes take effect. \n\nFor more information about Data Grid 8.4.0, refer to the 8.4.0 Release\nNotes[\u00b3]\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2056643 - CVE-2022-23647 prismjs: improperly escaped output allows a XSS\n2087186 - CVE-2022-24823 netty: world readable temporary file containing sensitive data\n2126789 - CVE-2022-25857 snakeyaml: Denial of Service due to missing nested depth limitation for collections\n2129706 - CVE-2022-38749 snakeyaml: Uncaught exception in org.yaml.snakeyaml.composer.Composer.composeSequenceNode\n2129707 - CVE-2022-38750 snakeyaml: Uncaught exception in org.yaml.snakeyaml.constructor.BaseConstructor.constructObject\n2129709 - CVE-2022-38751 snakeyaml: Uncaught exception in java.base/java.util.regex.Pattern$Ques.match\n2129710 - CVE-2022-38752 snakeyaml: Uncaught exception in java.base/java.util.ArrayList.hashCode\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" } ], "trust": 2.52 }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2022-0235", "trust": 3.6 }, { "db": "SIEMENS", "id": "SSA-637483", "trust": 1.1 }, { "db": "JVNDB", "id": "JVNDB-2022-003319", "trust": 0.8 }, { "db": "ICS CERT", "id": "ICSA-22-258-05", "trust": 0.1 }, { "db": "VULMON", "id": "CVE-2022-0235", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166812", "trust": 0.1 }, { "db": "PACKETSTORM", 
"id": "168657", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168150", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167622", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "171839", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "172897", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167679", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167459", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "169935", "trust": 0.1 } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "id": "VAR-202201-0349", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VARIoT devices database", "id": null } ], "trust": 0.20766129 }, "last_update_date": "2024-07-23T21:17:54.278000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "SSA-637483", "trust": 0.8, "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html" }, { "title": "Red Hat: Moderate: nodejs:14 security, bug fix, and enhancement update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230050 - security advisory" }, { "title": "Red Hat: CVE-2022-0235", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_cve_database\u0026qid=cve-2022-0235" 
}, { "title": "Red Hat: Moderate: rh-nodejs14-nodejs and rh-nodejs14-nodejs-nodemon security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20230612 - security advisory" }, { "title": "Red Hat: Important: Red Hat Data Grid 8.4.0 security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228524 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat OpenShift Service Mesh 2.1.2.1 containers security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221739 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.10 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221715 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.4 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221681 - security advisory" }, { "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.4.2 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20220735 - security advisory" }, { "title": "Red Hat: Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, \u0026 bugfix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226156 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.8 security and container updates", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221083 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.3 security 
updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221476 - security advisory" }, { "title": "IBM: Security Bulletin: IBM QRadar Assistant app for IBM QRadar SIEM includes components with multiple known vulnerabilities", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=0c5e20c044e4005143b2303b28407553" }, { "title": "IBM: Security Bulletin: Multiple security vulnerabilities are addressed with IBM Business Automation Manager Open Editions 8.0.1", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=ibm_psirt_blog\u0026qid=ac267c598ae2a2882a98ed5463cc028d" }, { "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225483 - security advisory" }, { "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224956 - security advisory" }, { "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225392 - security advisory" }, { "title": "Red Hat: Important: OpenShift Container Platform 4.11.0 bug fix and security update", "trust": 0.1, "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225069 - security advisory" }, { "title": "npcheck", "trust": 0.1, "url": "https://github.com/nodeshift/npcheck " }, { "title": "", "trust": 0.1, "url": "https://github.com/live-hack-cve/cve-2022-0235 " } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" } ] }, "problemtype_data": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-200", "trust": 1.0 }, { "problemtype": "Open redirect (CWE-601) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.4, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0235" }, { "trust": 1.1, "url": "https://huntr.dev/bounties/d26ab655-38d6-48b3-be15-f9ad6b6ae6f7" }, { "trust": 1.1, "url": "https://github.com/node-fetch/node-fetch/commit/36e47e8a6406185921e4985dcbeff140d73eaa10" }, { "trust": 1.1, "url": "https://cert-portal.siemens.com/productcert/pdf/ssa-637483.pdf" }, { "trust": 1.1, "url": "https://lists.debian.org/debian-lts-announce/2022/12/msg00007.html" }, { "trust": 0.8, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.8, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.8, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.8, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.5, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0536" }, { "trust": 0.3, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4157" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3744" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13974" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45485" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3773" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4002" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43976" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-0941" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43389" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-44733" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4037" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-37159" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3772" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-0404" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3669" }, { 
"trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3764" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43056" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-41864" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4197" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-26401" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3743" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1011" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45486" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-4788" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0286" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3759" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0002" 
}, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4203" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-42739" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0778" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-27191" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-23566" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24778" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-41190" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23566" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24773" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24771" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23647" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23647" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24772" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-25857" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25857" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-31129" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29526" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669" }, { "trust": 0.2, "url": 
"https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131" }, { "trust": 0.1, "url": "https://cwe.mitre.org/data/definitions/200.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:0050" }, { "trust": 0.1, "url": "https://www.cisa.gov/uscert/ics/advisories/icsa-22-258-05" }, { "trust": 0.1, "url": "https://www.ibm.com/blogs/psirt/security-bulletin-ibm-qradar-assistant-app-for-ibm-qradar-siem-includes-components-with-multiple-known-vulnerabilities/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0413" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25236" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-31566" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-22822" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22827" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0392" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23308" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-0516" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0330" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0392" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0261" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0920" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/index" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0778" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22942" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-31566" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0811" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-45960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-46143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0361" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0847" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-23177" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0261" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0155" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22826" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22825" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0318" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-0920" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0359" 
}, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0155" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-46143" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0359" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0413" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-4154" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22822" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:1476" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23177" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45960" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0144" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0318" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22823" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0361" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25315" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0811" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23218" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0847" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25235" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0144" }, { "trust": 0.1, 
"url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6835" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25647" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37136" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24771" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-41269" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25858" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37136" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26520" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25647" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-22569" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-37734" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0981" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41269" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21724" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-37137" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0981" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-22569" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2097" }, { "trust": 0.1, "url": "https://access.redhat.com//documentation/en-us/red_hat_openshift_data_foundation/4.11/html/4.11_release_notes/index" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-28327" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25314" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-29824" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2068" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1292" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27782" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24921" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27776" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1292" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-22576" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2068" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2097" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25313" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1586" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-27774" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0670" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1785" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1785" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2021-23440" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-40528" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1897" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1927" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24675" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1650" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:6156" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23772" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1708" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.1, 
"url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5392" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-23918" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-35065" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44531" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-35065" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-3517" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-43548" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24999" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24999" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38900" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-4904" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44906" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44533" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44533" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2023-23920" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-35256" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2023:1742" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44532" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-25881" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-35256" }, { "trust": 0.1, "url": 
"https://nvd.nist.gov/vuln/detail/cve-2022-21824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-44531" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21824" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-4904" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25881" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38900" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-43548" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44532" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-6158-1" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/node-fetch/1.7.3-2ubuntu0.1" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5483" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43858" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764" }, 
{ "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43816" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:4956" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3918" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-24823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38749" }, { "trust": 0.1, "url": "https://access.redhat.com/jbossnetwork/restricted/softwaredetail.html?softwareid=70381\u0026product=data.grid\u0026version=8.4\u0026downloadtype=patches" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:8524" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38749" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38752" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_data_grid/8.4/html-single/red_hat_data_grid_8.4_release_notes/index" }, { "trust": 0.1, "url": 
"https://access.redhat.com/security/cve/cve-2022-38752" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38751" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24823" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-38750" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-38751" } ], "sources": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULMON", "id": "CVE-2022-0235" }, { "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "db": "PACKETSTORM", "id": "166812" }, { "db": "PACKETSTORM", "id": "168657" }, { "db": "PACKETSTORM", "id": "168150" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "171839" }, { "db": "PACKETSTORM", "id": "172897" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "167459" }, { "db": "PACKETSTORM", "id": "169935" }, { "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-01-16T00:00:00", "db": "VULMON", "id": "CVE-2022-0235" }, { "date": "2023-02-14T00:00:00", "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "date": "2022-04-21T15:12:25", "db": "PACKETSTORM", "id": "166812" }, { "date": "2022-10-07T15:02:16", "db": "PACKETSTORM", "id": "168657" }, { "date": 
"2022-08-25T15:22:18", "db": "PACKETSTORM", "id": "168150" }, { "date": "2022-06-29T20:27:02", "db": "PACKETSTORM", "id": "167622" }, { "date": "2023-04-12T16:57:08", "db": "PACKETSTORM", "id": "171839" }, { "date": "2023-06-13T21:27:37", "db": "PACKETSTORM", "id": "172897" }, { "date": "2022-07-01T15:04:32", "db": "PACKETSTORM", "id": "167679" }, { "date": "2022-06-09T16:11:52", "db": "PACKETSTORM", "id": "167459" }, { "date": "2022-11-18T14:27:39", "db": "PACKETSTORM", "id": "169935" }, { "date": "2022-01-16T17:15:07.870000", "db": "NVD", "id": "CVE-2022-0235" } ] }, "sources_update_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-03T00:00:00", "db": "VULMON", "id": "CVE-2022-0235" }, { "date": "2023-02-14T04:12:00", "db": "JVNDB", "id": "JVNDB-2022-003319" }, { "date": "2023-02-03T19:16:07.090000", "db": "NVD", "id": "CVE-2022-0235" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "remote", "sources": [ { "db": "PACKETSTORM", "id": "172897" } ], "trust": 0.1 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "node-fetch\u00a0 Open redirect vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2022-003319" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "code execution, xss", "sources": [ { "db": "PACKETSTORM", "id": "168657" } ], "trust": 0.1 } }
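The record above (CWE-200 / CWE-601) concerns CVE-2022-0235: node-fetch versions before 2.6.7 and 3.1.1 could forward sensitive request headers such as `Authorization` and `Cookie` when following a redirect to a different host, leaking credentials to a third party. A minimal sketch of the cross-host check that a fix of this kind applies — the function name and header list here are illustrative, not node-fetch's actual internals:

```javascript
// Illustrative sketch: decide which headers to forward across a redirect.
// Header list and function name are assumptions for this example; they do
// not reproduce node-fetch's real implementation.
const SENSITIVE_HEADERS = ['authorization', 'cookie', 'proxy-authorization', 'www-authenticate'];

function headersForRedirect(originalUrl, redirectUrl, headers) {
  // Same-host redirects may keep credentials.
  if (new URL(originalUrl).host === new URL(redirectUrl).host) {
    return { ...headers };
  }
  // Cross-host redirect: strip credential-bearing headers so they
  // are not disclosed to an unrelated server.
  const forwarded = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!SENSITIVE_HEADERS.includes(name.toLowerCase())) {
      forwarded[name] = value;
    }
  }
  return forwarded;
}

// Same-host redirect keeps the Authorization header...
console.log(Object.keys(headersForRedirect(
  'https://api.example.com/a', 'https://api.example.com/b',
  { Authorization: 'Bearer secret', Accept: 'application/json' })));
// → [ 'Authorization', 'Accept' ]

// ...while a cross-host redirect drops it.
console.log(Object.keys(headersForRedirect(
  'https://api.example.com/a', 'https://attacker.example.net/',
  { Authorization: 'Bearer secret', Accept: 'application/json' })));
// → [ 'Accept' ]
```

In application code, an equivalent defensive option is to pass `redirect: 'manual'` to fetch and re-issue the follow-up request yourself, omitting credentials when the target host changes.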
Sightings
| Author | Source | Type | Date |
|---|---|---|---|
Nomenclature
- Seen: The vulnerability was mentioned, discussed, or otherwise seen by the user reporting the sighting.
- Confirmed: The vulnerability is confirmed from an analyst perspective.
- Exploited: This vulnerability was exploited and seen by the user reporting the sighting.
- Patched: This vulnerability was successfully patched by the user reporting the sighting.
- Not exploited: This vulnerability was not exploited or seen by the user reporting the sighting.
- Not confirmed: The user expresses doubt about the veracity of the vulnerability.
- Not patched: This vulnerability was not successfully patched by the user reporting the sighting.