var-202112-2255
Vulnerability from variot

In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn't properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. The Linux kernel contains a vulnerability related to its use of a weak cryptographic algorithm; information may be obtained as a result.
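
The mechanism behind this leak is easiest to see in a toy model. The sketch below (Python, illustrative only: the table size matches the kernel's 2048-slot counter table, but the hash, secret handling, and addresses are invented for the example and do not reproduce the jhash-based code in net/ipv6/output_core.c) shows why deriving fragment IDs from per-slot counters, indexed by a keyed hash of the address pair, lets an attacker who controls many source addresses detect collisions with a victim's flow:

import hashlib
import random

TABLE_SIZE = 2048                 # the kernel used a 2048-slot counter table
SECRET = random.getrandbits(64)   # stands in for the per-boot hash key
counters = [0] * TABLE_SIZE

def slot(src: str, dst: str) -> int:
    # Keyed hash of (src, dst) -> table slot; distinct flows can collide.
    h = hashlib.sha256(f"{SECRET}|{src}|{dst}".encode()).digest()
    return int.from_bytes(h[:4], "big") % TABLE_SIZE

def next_frag_id(src: str, dst: str) -> int:
    i = slot(src, dst)
    counters[i] += 1              # counter is shared by colliding flows
    return counters[i]

# Probing: if the attacker's observed ID jumps by more than its own
# traffic explains, the victim's flow shares the slot, which leaks
# information about the keyed hash and enables cross-flow tracking.
victim = ("2001:db8::v1", "2001:db8::s1")
for n in range(100000):
    atk_src = f"2001:db8:a::{n:x}"
    before = next_frag_id(atk_src, "2001:db8::s1")
    next_frag_id(*victim)         # victim emits one fragmented packet
    after = next_frag_id(atk_src, "2001:db8::s1")
    if after - before > 1:
        print(f"collision found with source {atk_src} after {n + 1} probes")
        break

The fix that landed in 5.13.3 replaces the table lookup with a per-packet random draw, so observing IDs on one flow no longer reveals anything about another flow or about the hashing secret.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256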

=====================================================================
                   Red Hat Security Advisory

Synopsis:          Important: kernel security, bug fix, and enhancement update
Advisory ID:       RHSA-2022:1988-01
Product:           Red Hat Enterprise Linux
Advisory URL:      https://access.redhat.com/errata/RHSA-2022:1988
Issue date:        2022-05-10
CVE Names:         CVE-2020-0404 CVE-2020-4788 CVE-2020-13974
                   CVE-2020-27820 CVE-2021-0941 CVE-2021-3612
                   CVE-2021-3669 CVE-2021-3743 CVE-2021-3744
                   CVE-2021-3752 CVE-2021-3759 CVE-2021-3764
                   CVE-2021-3772 CVE-2021-3773 CVE-2021-4002
                   CVE-2021-4037 CVE-2021-4083 CVE-2021-4157
                   CVE-2021-4197 CVE-2021-4203 CVE-2021-20322
                   CVE-2021-21781 CVE-2021-26401 CVE-2021-29154
                   CVE-2021-37159 CVE-2021-41864 CVE-2021-42739
                   CVE-2021-43056 CVE-2021-43389 CVE-2021-43976
                   CVE-2021-44733 CVE-2021-45485 CVE-2021-45486
                   CVE-2022-0001 CVE-2022-0002 CVE-2022-0286
                   CVE-2022-0322 CVE-2022-1011
=====================================================================

  1. Summary:

An update for kernel is now available for Red Hat Enterprise Linux 8.

Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

  2. Relevant releases/architectures:

Red Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, x86_64
Red Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64

  3. Description:

Security Fix(es):

  • kernel: fget: check that the fd still exists after getting a ref to it (CVE-2021-4083; a toy model of this race appears after this list)

  • kernel: avoid cyclic entity chains due to malformed USB descriptors (CVE-2020-0404)

  • kernel: speculation on incompletely validated data on IBM Power9 (CVE-2020-4788)

  • kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c (CVE-2020-13974)

  • kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free (CVE-2021-0941)

  • kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() (CVE-2021-3612)

  • kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts (CVE-2021-3669)

  • kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c (CVE-2021-3743)

  • kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() (CVE-2021-3744)

  • kernel: possible use-after-free in bluetooth module (CVE-2021-3752)

  • kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks (CVE-2021-3759)

  • kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)

  • kernel: sctp: Invalid chunks may be used to remotely remove existing associations (CVE-2021-3772)

  • kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients (CVE-2021-3773)

  • kernel: possible leak or corruption of data residing on hugetlbfs (CVE-2021-4002)

  • kernel: security regression for CVE-2018-13405 (CVE-2021-4037)

  • kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)

  • kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)

  • kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)

  • kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies (CVE-2021-20322)

  • kernel: arm: SIGPAGE information disclosure vulnerability (CVE-2021-21781)

  • hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 (CVE-2021-26401)

  • kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation (CVE-2021-29154)

  • kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c (CVE-2021-37159)

  • kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write (CVE-2021-41864)

  • kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)

  • kernel: ppc: kvm: allows a malicious KVM guest to crash the host (CVE-2021-43056)

  • kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c (CVE-2021-43389)

  • kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device (CVE-2021-43976)

  • kernel: use-after-free in the TEE subsystem (CVE-2021-44733)

  • kernel: information leak in the IPv6 implementation (CVE-2021-45485)

  • kernel: information leak in the IPv4 implementation (CVE-2021-45486)

  • hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)

  • hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)

  • kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)

  • kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c (CVE-2022-0322)

  • kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes (CVE-2022-1011)

  • kernel: use-after-free in nouveau kernel module (CVE-2020-27820)
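
Several of the fixes above follow the same check-after-reference pattern; the CVE-2021-4083 entry, flagged above, is the clearest example. The sketch below is a toy model in Python (illustrative only: the real code is the RCU-protected lookup in the kernel's __fget_files(), which additionally refuses to take a reference once the count has already dropped to zero; all names here are invented) of the time-of-check/time-of-use race and of the fix, which rechecks the fd table after taking the reference:

import threading

class File:
    def __init__(self, name: str):
        self.name = name
        self.refs = 1
        self.freed = False

    def get(self) -> "File":
        self.refs += 1
        return self

    def put(self) -> None:
        self.refs -= 1
        if self.refs == 0:
            self.freed = True      # stands in for the kernel freeing the object

fd_table = {3: File("socket")}

def close_fd(fd: int) -> None:
    f = fd_table.pop(fd, None)
    if f is not None:
        f.put()                    # drops the table's reference

def fget_buggy(fd: int):
    f = fd_table.get(fd)           # time of check ...
    if f is None:
        return None
    # ... a concurrent close_fd() can free f right here ...
    return f.get()                 # time of use: ref taken on a freed object

def fget_fixed(fd: int):
    f = fd_table.get(fd)
    if f is None:
        return None
    f.get()
    if fd_table.get(fd) is not f:  # recheck: does the fd still map to f?
        f.put()                    # lost the race; drop our ref and bail out
        return None
    return f

# With the recheck, a racing close either happens before our lookup (we
# return None) or after our reference is taken (the ref keeps f alive).
t = threading.Thread(target=close_fd, args=(3,))
t.start()
f = fget_fixed(3)
t.join()
print("got file" if f is not None and not f.freed else "fd already closed")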

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

For detailed information on changes in this release, see the Red Hat Enterprise Linux 8.6 Release Notes linked from the References section.

  4. Solution:

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

The system must be rebooted for this update to take effect.

  5. Bugs fixed (https://bugzilla.redhat.com/):

1888433 - CVE-2020-4788 kernel: speculation on incompletely validated data on IBM Power9 1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module 1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors 1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation 1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver 1957375 - [RFE] x86, tsc: Add kcmdline args for skipping tsc calibration sequences 1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP() 1981950 - CVE-2021-21781 kernel: arm: SIGPAGE information disclosure vulnerability 1983894 - Hostnetwork pod to service backed by hostnetwork on the same node is not working with OVN Kubernetes 1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c 1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts 1994390 - FIPS: deadlock between PID 1 and "modprobe crypto-jitterentropy_rng" at boot, preventing system to boot 1997338 - block: update to upstream v5.14 1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function 1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c 1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module 1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks 2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd() 2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely remove existing associations 2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients 2009312 - Incorrect system time reported by the cpu guest statistics (PPC only). 
2009521 - XFS: sync to upstream v5.11 2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write 2011104 - statfs reports wrong free space for small quotas 2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c 2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies 2015525 - SCTP peel-off with SELinux and containers in OCP 2015755 - zram: zram leak with warning when running zram02.sh in ltp 2016169 - CVE-2020-13974 kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c 2017073 - CVE-2021-43056 kernel: ppc: kvm: allows a malicious KVM guest to crash the host 2017796 - ceph omnibus backport for RHEL-8.6.0 2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free 2022814 - Rebase the input and HID stack in 8.6 to v5.15 2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device 2025726 - CVE-2021-4002 kernel: possible leak or corruption of data residing on hugetlbfs 2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405 2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it 2030476 - Kernel 4.18.0-348.2.1 secpath_cache memory leak involving strongswan tunnel 2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem 2031200 - rename(2) fails on subfolder mounts when the share path has a trailing slash 2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function 2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks 2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses 2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa 2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation 2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation 2042798 - [RHEL8.6][sfc] General sfc driver update 2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c 2043453 - [RHEL8.6 wireless] stack & drivers general update to v5.16+ 2046021 - kernel 4.18.0-358.el8 async dirops causes write errors with namespace restricted caps 2048251 - Selinux is not allowing SCTP connection setup between inter pod communication in enforcing mode 2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI) 2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI 2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes

  6. Package List:

Red Hat Enterprise Linux BaseOS (v. 8):

Source: kernel-4.18.0-372.9.1.el8.src.rpm

aarch64: bpftool-4.18.0-372.9.1.el8.aarch64.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-4.18.0-372.9.1.el8.aarch64.rpm kernel-core-4.18.0-372.9.1.el8.aarch64.rpm kernel-cross-headers-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-core-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-devel-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-modules-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm kernel-devel-4.18.0-372.9.1.el8.aarch64.rpm kernel-headers-4.18.0-372.9.1.el8.aarch64.rpm kernel-modules-4.18.0-372.9.1.el8.aarch64.rpm kernel-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-libs-4.18.0-372.9.1.el8.aarch64.rpm perf-4.18.0-372.9.1.el8.aarch64.rpm perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm

noarch: kernel-abi-stablelists-4.18.0-372.9.1.el8.noarch.rpm kernel-doc-4.18.0-372.9.1.el8.noarch.rpm

ppc64le: bpftool-4.18.0-372.9.1.el8.ppc64le.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-core-4.18.0-372.9.1.el8.ppc64le.rpm kernel-cross-headers-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-core-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-devel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-modules-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm kernel-devel-4.18.0-372.9.1.el8.ppc64le.rpm kernel-headers-4.18.0-372.9.1.el8.ppc64le.rpm kernel-modules-4.18.0-372.9.1.el8.ppc64le.rpm kernel-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-libs-4.18.0-372.9.1.el8.ppc64le.rpm perf-4.18.0-372.9.1.el8.ppc64le.rpm perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm

s390x: bpftool-4.18.0-372.9.1.el8.s390x.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-4.18.0-372.9.1.el8.s390x.rpm kernel-core-4.18.0-372.9.1.el8.s390x.rpm kernel-cross-headers-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-core-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.s390x.rpm kernel-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-372.9.1.el8.s390x.rpm kernel-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-headers-4.18.0-372.9.1.el8.s390x.rpm kernel-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-modules-extra-4.18.0-372.9.1.el8.s390x.rpm kernel-tools-4.18.0-372.9.1.el8.s390x.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-core-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-devel-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-modules-4.18.0-372.9.1.el8.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-372.9.1.el8.s390x.rpm perf-4.18.0-372.9.1.el8.s390x.rpm perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm python3-perf-4.18.0-372.9.1.el8.s390x.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm

x86_64: bpftool-4.18.0-372.9.1.el8.x86_64.rpm bpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-4.18.0-372.9.1.el8.x86_64.rpm kernel-core-4.18.0-372.9.1.el8.x86_64.rpm kernel-cross-headers-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-core-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-devel-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-modules-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm kernel-devel-4.18.0-372.9.1.el8.x86_64.rpm kernel-headers-4.18.0-372.9.1.el8.x86_64.rpm kernel-modules-4.18.0-372.9.1.el8.x86_64.rpm kernel-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-libs-4.18.0-372.9.1.el8.x86_64.rpm perf-4.18.0-372.9.1.el8.x86_64.rpm perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm

Red Hat CodeReady Linux Builder (v. 8):

aarch64: bpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.aarch64.rpm perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm

ppc64le: bpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.ppc64le.rpm perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm

x86_64: bpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm kernel-tools-libs-devel-4.18.0-372.9.1.el8.x86_64.rpm perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm python3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm

These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/

  7. References:

https://access.redhat.com/security/cve/CVE-2020-0404 https://access.redhat.com/security/cve/CVE-2020-4788 https://access.redhat.com/security/cve/CVE-2020-13974 https://access.redhat.com/security/cve/CVE-2020-27820 https://access.redhat.com/security/cve/CVE-2021-0941 https://access.redhat.com/security/cve/CVE-2021-3612 https://access.redhat.com/security/cve/CVE-2021-3669 https://access.redhat.com/security/cve/CVE-2021-3743 https://access.redhat.com/security/cve/CVE-2021-3744 https://access.redhat.com/security/cve/CVE-2021-3752 https://access.redhat.com/security/cve/CVE-2021-3759 https://access.redhat.com/security/cve/CVE-2021-3764 https://access.redhat.com/security/cve/CVE-2021-3772 https://access.redhat.com/security/cve/CVE-2021-3773 https://access.redhat.com/security/cve/CVE-2021-4002 https://access.redhat.com/security/cve/CVE-2021-4037 https://access.redhat.com/security/cve/CVE-2021-4083 https://access.redhat.com/security/cve/CVE-2021-4157 https://access.redhat.com/security/cve/CVE-2021-4197 https://access.redhat.com/security/cve/CVE-2021-4203 https://access.redhat.com/security/cve/CVE-2021-20322 https://access.redhat.com/security/cve/CVE-2021-21781 https://access.redhat.com/security/cve/CVE-2021-26401 https://access.redhat.com/security/cve/CVE-2021-29154 https://access.redhat.com/security/cve/CVE-2021-37159 https://access.redhat.com/security/cve/CVE-2021-41864 https://access.redhat.com/security/cve/CVE-2021-42739 https://access.redhat.com/security/cve/CVE-2021-43056 https://access.redhat.com/security/cve/CVE-2021-43389 https://access.redhat.com/security/cve/CVE-2021-43976 https://access.redhat.com/security/cve/CVE-2021-44733 https://access.redhat.com/security/cve/CVE-2021-45485 https://access.redhat.com/security/cve/CVE-2021-45486 https://access.redhat.com/security/cve/CVE-2022-0001 https://access.redhat.com/security/cve/CVE-2022-0002 https://access.redhat.com/security/cve/CVE-2022-0286 https://access.redhat.com/security/cve/CVE-2022-0322 https://access.redhat.com/security/cve/CVE-2022-1011 https://access.redhat.com/security/updates/classification/#important https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/

  8. Contact:

The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/

Copyright 2022 Red Hat, Inc.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUBYnqSF9zjgjWX9erEAQjBXQ/8DSpFUMNN6ZVFtli2KuVowVLS+14J0jtj
0zxpr0skJT8vVulU3VTeURBMdg9NAo9bj3R5KTk2+dC+AMuHET5aoVvaYmimBGKL
5qzpu7q9Z0aaD2I288suHCnYuRJnt+qKZtNa4hlcY92bN0tcYBonxsdIS2xM6xIu
GHNS8HNVUNz4PuCBfmbITvgX9Qx+iZQVlVccDBG5LDpVwgOtnrxHKbe5E499v/9M
oVoN+eV9ulHAZdCHWlUAahbsvEqDraCKNT0nHq/xO5dprPjAcjeKYMeaICtblRr8
k+IouGywaN+mW4sBjnaaiuw2eAtoXq/wHisX1iUdNkroqcx9NBshWMDBJnE4sxQJ
ZOSc8B6yjJItPvUI7eD3BDgoka/mdoyXTrg+9VRrir6vfDHPrFySLDrO1O5HM5fO
3sExCVO2VM7QMCGHJ1zXXX4szk4SV/PRsjEesvHOyR2xTKZZWMsXe1h9gYslbADd
tW0yco/G23xjxqOtMKuM/nShBChflMy9apssldiOfdqODJMv5d4rRpt0xgmtSOM6
qReveuQCasmNrGlAHgDwbtWz01fmSuk9eYDhZNmHA3gxhoHIV/y+wr0CLbOQtDxT
p79nhiqwUo5VMj/X30Lu0Wl3ptLuhRWamzTCkEEzdubr8aVsT4RRNQU3KfVFfpT1
MWp/2ui3i80=
=Fdgy
-----END PGP SIGNATURE-----

--
RHSA-announce mailing list
RHSA-announce@redhat.com
https://listman.redhat.com/mailman/listinfo/rhsa-announce

8) - x86_64

  1. Description:

The kernel-rt packages provide the Real Time Linux Kernel, which enables fine-tuning for systems with extremely high determinism requirements.

Summary:

The Migration Toolkit for Containers (MTC) 1.7.2 is now available.

Description:

The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API.

Bugs fixed (https://bugzilla.redhat.com/):

2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes 2038898 - [UI] 'Update Repository' option not getting disabled after adding the Replication Repository details to the MTC web console 2040693 - 'Replication repository' wizard has no validation for name length 2040695 - [MTC UI] 'Add Cluster' wizard stucks when the cluster name length is more than 63 characters 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2048537 - Exposed route host to image registry? connecting successfully to invalid registry 'xyz.com' 2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak 2055658 - [MTC UI] Cancel button on 'Migrations' page does not disappear when migration gets Failed/Succeeded with warnings 2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace 2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the 'Last State' field. 2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade 2061335 - [MTC UI] 'Update cluster' button is not getting disabled 2062266 - MTC UI does not display logs properly [OADP-BL] 2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend 2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x 2076593 - Velero pod log missing from UI drop down 2076599 - Velero pod log missing from downloaded logs folder [OADP-BL] 2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan 2079252 - [MTC] Rsync options logs not visible in log-reader pod 2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI] 2082225 - non-numeric user when launching stage pods [OADP-BL] 2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments 2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods 2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels 2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL] 2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts 2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL] 2096939 - Fix legacy operator.yml inconsistencies and errors 2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured.

Description:

Red Hat Advanced Cluster Management for Kubernetes 2.5.0 images

Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:

https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/

Security fixes:

  • nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)

  • containerd: Unprivileged pod may bind mount any privileged regular file on disk (CVE-2021-43816)

  • minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)

  • openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates (CVE-2022-0778)

  • imgcrypt: Unauthorized access to encrypted container image on a shared system due to missing check in CheckAuthorization() code path (CVE-2022-24778)

  • golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)

  • node-fetch: exposure of sensitive information to an unauthorized actor (CVE-2022-0235)

  • nconf: Prototype pollution in memory store (CVE-2022-21803)

  • golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)

  • nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account (CVE-2022-24450)

  • Moment.js: Path traversal in moment.locale (CVE-2022-24785)

  • golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)

  • go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)

  • opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)

Bug fixes:

  • RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target (BZ# 2014557)

  • RHACM 2.5.0 images (BZ# 2024938)

  • [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?) (BZ#2028348)

  • Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly (BZ# 2028647)

  • create cluster pool -> choose infra type, As a result infra providers disappear from UI. (BZ# 2033339)

  • Restore/backup shows up as Validation failed but the restore backup status in ACM shows success (BZ# 2034279)

  • Observability - OCP 311 node role are not displayed completely (BZ# 2038650)

  • Documented uninstall procedure leaves many leftovers (BZ# 2041921)

  • infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 (BZ# 2046554)

  • Acm failed to install due to some missing CRDs in operator (BZ# 2047463)

  • Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)

  • ACM home page now includes /home/ in url (BZ# 2051299)

  • proxy heading in Add Credential should be capitalized (BZ# 2051349)

  • ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0 (BZ# 2051983)

  • Create Policy button does not work and user cannot use console to create policy (BZ# 2053264)

  • No cluster information was displayed after a policyset was created (BZ# 2053366)

  • Dynamic plugin update does not take effect in Firefox (BZ# 2053516)

  • Replicated policy should not be available when creating a Policy Set (BZ# 2054431)

  • Placement section in Policy Set wizard does not reset when users click "Back" to re-configured placement (BZ# 2054433)

Bugs fixed (https://bugzilla.redhat.com/):

2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target 2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability 2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion 2028224 - RHACM 2.5.0 images 2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?) 2028647 - Clusters are in 'Degraded' status with upgrade env due to obs-controller not working properly 2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic 2033339 - create cluster pool -> choose infra type , As a result infra providers disappear from UI. 2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success 2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API 2038650 - Observability - OCP 311 node role are not displayed completely 2041921 - Documented uninstall procedure leaves many leftovers 2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk 2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor 2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5 2047463 - Acm failed to install due to some missing CRDs in operator 2051298 - Navigation icons no longer showing in ACM 2.5 2051299 - ACM home page now includes /home/ in url 2051349 - proxy heading in Add Credential should be capitalized 2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0 2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account 2053264 - Create Policy button does not work and user cannot use console to create policy 2053366 - No cluster information was displayed after a policyset was created 2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements 2053516 - Dynamic plugin update does not take effect in Firefox 2054431 - Replicated policy should not be available when creating a Policy Set 2054433 - Placement section in Policy Set wizard does not reset when users click "Back" to re-configured placement 2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config 2054860 - Cluster overview page crashes for on-prem cluster 2055333 - Unable to delete assisted-service operator 2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace 2056485 - [UI] In infraenv detail the host list don't have pagination 2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5 2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted) 2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped 'unknown' for velero backups 2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes 2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist 2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM 2060151 - Policy set of the same name cannot be re-created after the previous one has been deleted 2060230 - [UI] Delete host modal has 
incorrect host's name populated 2060309 - multiclusterhub stuck in installing on "ManagedClusterConditionAvailable" [intermittent] 2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0 2060550 - MCE installation hang due to no console-mce-console deployment available 2060603 - prometheus doesn't display managed clusters 2060831 - Observability - prometheus-operator failed to start on KS 2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub 2061260 - The value of the policyset placement should be filtered space when input cluster label expression 2061311 - Cleanup of installed spoke clusters hang on deletion of spoke namespace 2061659 - the network section in create cluster -> Networking include the brace in the network title 2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing 2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH 2062009 - No name validation is performed on Policy and Policy Set Wizards 2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID 2062025 - No validation is done on yaml's format or content in Policy and Policy Set wizards 2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates 2062337 - velero schedules get re-created after the backupschedule is in 'BackupCollision' phase 2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH 2062556 - Always return the policyset page after created the policy from UI 2062787 - Submariner Add-on UI does not indicate on Broker error 2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can't see policies and clusters page 2063341 - Release imagesets are missing in the console for ocp 4.10 2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed 2063596 - claim clusters from clusterpool throws errors 2063599 - Update the message in clusterset -> clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment 2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS enabled env 2064231 - Can not clean the instance type for worker pool when create the clusters 2064247 - prefer UI can add the architecture type when create the cluster 2064392 - multicloud oauth-proxy failed to log users in on web 2064477 - Click at "Edit Policy" for each policy leads to a blank page 2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job 2064516 - Unable to delete an automation job of a policy 2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable 2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled 2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server 2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster 2064899 - Failed to provision openshift 4.10 on bare metal 2065436 - "Filter" drop-down list does not show entries of the policies that have no top-level remediation specified 2066198 - Issues about disabled policy from UI 2066207 - The new created policy should be always shown up on the first line 2066333 - The message was confuse when the cluster status is Running 2066383 - MCE install failing on proxy disconnected environment 2066433 - Logout not working for ACM 2.5 
2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10 2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job 2066544 - The search box can't work properly in Policies page 2066594 - RFE: Can't open the helm source link of the backup-restore-enabled policy from UI 2066650 - minor issues in cluster curator due to the startup throws errors 2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration 2066834 - Hibernating cluster(s) in cluster pool stuck in 'Stopping' status after restore activation 2066842 - cluster pool credentials are not backed up 2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set 2066940 - Validation fired out for https proxy when the link provided not starting with https 2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed 2066979 - MIssing groups in policy filter options comparing to previous RHACM version 2067053 - I was not able to remove the image mirror content when create the cluster 2067067 - Can't filter the cluster info when clicked the cluster in the Placement section 2067207 - Bare metal asset secrets are not backed up 2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template 2067713 - Columns on policy's "Results" are not sort-able as in previous release 2067728 - Can't search in the policy creation or policyset creation Yaml editor 2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology 2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll 2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4 2068313 - Application Lifecycle - Refreshing overview page leads to a blank page 2068328 - A cluster's "View history" page should not contain all clusters' violations history 2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub 2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard 2069329 - config-policy-controller addon with "Unknown" status in OCP 3.11 managed cluster after upgrade hub to 2.5 2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path 2069469 - Status of unreachable clusters is not reported in several places on GRC panels 2069615 - The YAML editor can't work well when login UI using dynamic console plugin 2069622 - No validation for policy template's name 2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow 2069867 - Error occurs when trying to edit an application set/subscription 2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing 2069875 - Cluster secrets are not being created in the managed cluster's namespace 2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar 2070203 - Blank Application is shown when editing an Application with AnsibleJobs 2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR 2070846 - [ACM 2.5] Can't re-add the default clusterset label after removing it from a managedcluster on BM SNO hub 2071066 - Policy set details panel does 
not work when deployed into namespace different than "default" 2071173 - Configured RunOnce automation job is not displayed although the policy has no violation 2071191 - MIssing title on details panel after clicking "view details" of a policy set card 2071769 - Placement must be always configured or error is reported when creating a policy 2071818 - ACM logo not displayed in About info modal 2071869 - Topology includes the status of local cluster resources when Application is only deployed to managed cluster 2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale 2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page 2072104 - Inconsistent "Not Deployed" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology 2072177 - Cluster Resource Status is showing App Definition Statuses as well 2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource Statuses 2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters 2072334 - Redirect URL is now to the details page after created a policy 2072342 - Shows "NaN%" in the ring chart when add the disabled policy into policyset and view its details 2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling 2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes 2072504 - The policy has violations on the failed managed cluster 2072551 - URL dropdown is not being rendered with an Argo App with a new URL 2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up 2072824 - The edit/delete policyset button should be greyed when using viewer check 2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses. 
2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub 2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO 2073355 - Get blank page when click policy with unknown status in Governance -> Overview page 2073508 - Thread responsible to get insights data from ks clusters is broken 2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters 2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology 2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin 2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of "resource conflict" error 2074178 - Editing Helm Argo Applications does not Prune Old Resources 2074626 - Policy placement failure during ZTP SNO scale test 2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store 2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal 2074937 - UI allows creating cluster even when there are no ClusterImageSets 2075416 - infraEnv failed to create image after restore 2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod 2075739 - The lookup function won't check the referred resource whether exist when using template policies 2076421 - Can't select existing placement for policy or policyset when editing policy or policyset 2076494 - No policyreport CR for spoke clusters generated in the disconnected env 2076502 - The policyset card doesn't show the cluster status(violation/without violation) again after deleted one policy 2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator 2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed cluster 2077291 - Prometheus doesn't display acm_managed_cluster_info after upgrade from 2.4 to 2.5 2077304 - Create Cluster button is disabled only if other clusters exist 2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5 2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI 2077751 - Can't create a template policy from UI when the object's name is referring Golang text template syntax in this policy 2077783 - Still show violation for clusterserviceversions after enforced "Detect Image vulnerabilities " policy template and the operator is installed 2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set 2078164 - Failed to edit a policy without placement 2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement 2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists 2078617 - Azure public credential details get pre-populated with base domain name in UI 2078952 - View pod logs in search details returns error 2078973 - Crashed pod is marked with success in Topology 2079013 - Changing existing placement rules does not change YAML file 2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM 2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on 2079494 - Hitting Enter in yaml editor caused unexpected keys "key00x:" to be created 2079533 - Clusters with no 
default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5 2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page 2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted 2079615 - Edit appset placement in UI with a new placement throws error upon submitting 2079658 - Cluster Count is Incorrect in Application UI 2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower 2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters 2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role 2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses 2080503 - vSphere network name doesn't allow entering spaces and doesn't reflect YAML changes 2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page 2080712 - Select an existing placement configuration does not work 2080776 - Unrecognized characters are displayed on policy and policy set yaml editors 2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster 2081810 - Type '-' character in Name field caused previously typed character backspaced in in the name field of policy wizard 2081829 - Application deployed on local cluster's topology is crashing after upgrade 2081938 - The deleted policy still be shown on the policyset review page when edit this policy set 2082226 - Object Storage Topology includes residue of resources after Upgrade 2082409 - Policy set details panel remains even after the policy set has been deleted 2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets 2083038 - Warning still refers to the klusterlet-addon-appmgr pod rather than the application-manager pod 2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated 2083434 - The provider-credential-controller did not support the RHV credentials type 2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace 2083870 - When editing an existing application and refreshing the Select an existing placement configuration, multiple occurrences of the placementrule gets displayed 2084034 - The status message looks messy in the policy set card, suggest one kind status one a row 2084158 - Support provisioning bm cluster where no provisioning network provided 2084622 - Local Helm application shows cluster resources as Not Deployed in Topology [Upgrade] 2085083 - Policies fail to copy to cluster namespace after ACM upgrade 2085237 - Resources referenced by a channel are not annotated with backup label 2085273 - Error querying for ansible job in app topology 2085281 - Template name error is reported but the template name was found in a different replicated policy 2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page 2087515 - Validation thrown out in configuration for disconnect install while creating bm credential 2088158 - Object Storage Application deployed to all clusters is showing 
unemployed in topology [Upgrade] 2088511 - Some cluster resources are not showing labels that are defined in the YAML

  1. Description:

Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.

This advisory contains the container images for Red Hat OpenShift Container Platform 4.8.53. See the following advisory for the RPM packages for this release:

https://access.redhat.com/errata/RHBA-2022:7873

Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

Security Fix(es):

  • go-getter: command injection vulnerability (CVE-2022-26945)
  • go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)
  • go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)
  • go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Solution:

For OpenShift Container Platform 4.8 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:

https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html

You may download the oc tool and use it to inspect release image metadata for x86_64, s390x, and ppc64le architectures. The image digests may be found at https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags

The sha values for the release are:

(For x86_64 architecture) The image digest is sha256:ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53

(For s390x architecture) The image digest is sha256:ac2c74a664257cea299126d4f789cdf9a5a4efc4a4e8c2361b943374d4eb21e4

(For ppc64le architecture) The image digest is sha256:53adc42ed30ad39d7117837dbf5a6db6943a8f0b3b61bc0d046b83394f5c28b2
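
As a minimal illustration of what these digests pin down, the sketch below (Python; the blob path is a hypothetical stand-in for a locally saved artifact, since the supported workflow is inspecting release image metadata with the oc tool as described above) recomputes a SHA-256 over a file and compares it to the published x86_64 value:

import hashlib

# Published digest for the x86_64 release image (from this advisory).
EXPECTED = "ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

digest = sha256_of("release-image-x86_64.blob")   # hypothetical local file name
print("digest matches" if digest == EXPECTED else f"MISMATCH: {digest}")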

All OpenShift Container Platform 4.8 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html

  1. Bugs fixed (https://bugzilla.redhat.com/):

2077100 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204 2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3) 2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3) 2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3) 2092928 - CVE-2022-26945 go-getter: command injection vulnerability

  1. JIRA issues fixed (https://issues.jboss.org/):

OCPBUGS-2205 - Prefer local dns does not work expectedly on OCPv4.8 OCPBUGS-2347 - [cluster-api-provider-baremetal] fix 4.8 build OCPBUGS-2577 - [4.8] ETCD Operator goes degraded when a second internal node ip is added OCPBUGS-2773 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name OCPBUGS-2989 - [4.8] cri-o should report the stage of container and pod creation it's stuck at


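What follows is the raw VARIoT JSON-LD record for this entry. Its @context block is simply a key-to-IRI mapping: "@vocab" supplies a default namespace, and per-key "@id" entries override it. A toy expansion rule (Python, not a conformant JSON-LD processor; only a few keys from the record are reproduced) looks like this:

# Simplified key expansion mirroring the @context in the record below.
context = {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "cvss": {"@id": "https://www.variotdbs.pl/ref/cvss/"},
    "title": {"@id": "https://www.variotdbs.pl/ref/title/"},
}

def expand_key(key: str) -> str:
    entry = context.get(key)
    if isinstance(entry, dict) and "@id" in entry:
        return entry["@id"]               # explicit per-key mapping wins
    return context["@vocab"] + key        # otherwise fall back to @vocab

for k in ("cvss", "title", "model"):
    print(k, "->", expand_key(k))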


{
  "@context": {
    "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#",
    "affected_products": {
      "@id": "https://www.variotdbs.pl/ref/affected_products"
    },
    "configurations": {
      "@id": "https://www.variotdbs.pl/ref/configurations"
    },
    "credits": {
      "@id": "https://www.variotdbs.pl/ref/credits"
    },
    "cvss": {
      "@id": "https://www.variotdbs.pl/ref/cvss/"
    },
    "description": {
      "@id": "https://www.variotdbs.pl/ref/description/"
    },
    "exploit_availability": {
      "@id": "https://www.variotdbs.pl/ref/exploit_availability/"
    },
    "external_ids": {
      "@id": "https://www.variotdbs.pl/ref/external_ids/"
    },
    "iot": {
      "@id": "https://www.variotdbs.pl/ref/iot/"
    },
    "iot_taxonomy": {
      "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/"
    },
    "patch": {
      "@id": "https://www.variotdbs.pl/ref/patch/"
    },
    "problemtype_data": {
      "@id": "https://www.variotdbs.pl/ref/problemtype_data/"
    },
    "references": {
      "@id": "https://www.variotdbs.pl/ref/references/"
    },
    "sources": {
      "@id": "https://www.variotdbs.pl/ref/sources/"
    },
    "sources_release_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_release_date/"
    },
    "sources_update_date": {
      "@id": "https://www.variotdbs.pl/ref/sources_update_date/"
    },
    "threat_type": {
      "@id": "https://www.variotdbs.pl/ref/threat_type/"
    },
    "title": {
      "@id": "https://www.variotdbs.pl/ref/title/"
    },
    "type": {
      "@id": "https://www.variotdbs.pl/ref/type/"
    }
  },
  "@id": "https://www.variotdbs.pl/vuln/VAR-202112-2255",
  "affected_products": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/affected_products#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "model": "all flash fabric-attached storage 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "e-series santricity os controller",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h700s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "hci compute node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h700e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": "lt",
        "trust": 1.0,
        "vendor": "linux",
        "version": "5.13.3"
      },
      {
        "model": "h500e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h300e",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h615c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h610c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "communications cloud native core policy",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "22.2.0"
      },
      {
        "model": "brocade fabric operating system",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fabric-attached storage 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "communications cloud native core network exposure function",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "22.1.1"
      },
      {
        "model": "fabric-attached storage a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h300s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fabric-attached storage 8700",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "all flash fabric-attached storage 8300",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire \\\u0026 hci management node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire\\, enterprise sds \\\u0026 hci storage node",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "communications cloud native core binding support function",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "oracle",
        "version": "22.1.3"
      },
      {
        "model": "h610s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h500s",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "aff a400",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "h410c",
        "scope": "eq",
        "trust": 1.0,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "e-series santricity os controller software",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "hci baseboard management controller h300e",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "kernel",
        "scope": null,
        "trust": 0.8,
        "vendor": "linux",
        "version": null
      },
      {
        "model": "fas/aff baseboard management controller a400",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas/aff baseboard management controller 8700",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "hci baseboard management controller h410c",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire enterprise sds \u0026 hci storage node",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "solidfire \u0026 hci management node",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "hci baseboard management controller h300s",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      },
      {
        "model": "fas/aff baseboard management controller 8300",
        "scope": null,
        "trust": 0.8,
        "vendor": "netapp",
        "version": null
      }
    ],
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
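Only the Linux kernel entry above carries a ranged scope ("lt" 5.13.3); the NetApp and Oracle entries are exact ("eq") or unscoped matches. Below is a minimal sketch of testing a version against such an entry, assuming dotted numeric versions and treating unscoped entries as product-line references; parse_version is a hypothetical helper, not part of the record format.

# Minimal sketch: test a version against the ranged kernel entry above.
# Assumes dotted numeric versions; a real consumer would use a proper
# version library and handle pre-release suffixes.
def parse_version(v):
    # hypothetical helper: "5.13.3" -> (5, 13, 3)
    return tuple(int(p) for p in v.split("."))

def entry_matches(entry, vendor, model, version):
    if entry["vendor"] != vendor or entry["model"] != model:
        return False
    scope, ref = entry.get("scope"), entry.get("version")
    if scope is None or ref is None:
        # unscoped entries (trust 0.8 above) name a product line, not a version
        return True
    if scope == "eq":
        return version == ref
    if scope == "lt":
        return parse_version(version) < parse_version(ref)
    return False

kernel_entry = {"model": "kernel", "scope": "lt", "trust": 1.0,
                "vendor": "linux", "version": "5.13.3"}
assert entry_matches(kernel_entry, "linux", "kernel", "5.13.2")      # affected
assert not entry_matches(kernel_entry, "linux", "kernel", "5.13.3")  # fixed version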
  "configurations": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/configurations#",
      "children": {
        "@container": "@list"
      },
      "cpe_match": {
        "@container": "@list"
      },
      "data": {
        "@container": "@list"
      },
      "nodes": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "CVE_data_version": "4.0",
        "nodes": [
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*",
                "cpe_name": [],
                "versionEndExcluding": "5.13.3",
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:netapp:e-series_santricity_os_controller:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:solidfire_\\\u0026_hci_management_node:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:o:netapp:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:netapp:solidfire\\,_enterprise_sds_\\\u0026_hci_storage_node:-:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [],
            "cpe_match": [
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_policy:22.2.0:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              },
              {
                "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_network_exposure_function:22.1.1:*:*:*:*:*:*:*",
                "cpe_name": [],
                "vulnerable": true
              }
            ],
            "operator": "OR"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:all_flash_fabric-attached_storage_8300_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:all_flash_fabric-attached_storage_8300:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_8300_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_8300:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:all_flash_fabric-attached_storage_8700_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:all_flash_fabric-attached_storage_8700:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_8700_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_8700:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:aff_a400_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:aff_a400:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:fabric-attached_storage_a400_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:fabric-attached_storage_a400:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:hci_compute_node_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:hci_compute_node:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h300e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h300e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h500e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h500e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h610c_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h610c:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h610s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h610s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h615c_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h615c:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h700e_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h700e:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          },
          {
            "children": [
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": true
                  }
                ],
                "operator": "OR"
              },
              {
                "children": [],
                "cpe_match": [
                  {
                    "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
                    "cpe_name": [],
                    "vulnerable": false
                  }
                ],
                "operator": "OR"
              }
            ],
            "cpe_match": [],
            "operator": "AND"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
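The node tree above encodes NVD applicability logic: plain OR nodes list interchangeable vulnerable CPEs, while each AND node pairs a vulnerable firmware CPE (part "o", "vulnerable": true) with the hardware platform it ships on (part "h", "vulnerable": false), i.e. the platform is affected only through its firmware. A simplified evaluator sketch follows; it matches CPE URIs as exact strings and ignores wildcards and versionEndExcluding, both of which a real CPE matcher must honor.

# Simplified sketch of evaluating the CVE_data_version 4.0 node tree above.
# "vulnerable": false entries mark environment CPEs (the hardware platform);
# this sketch treats them, like vulnerable ones, as presence tests.
def node_applies(node, installed_cpes):
    results = [node_applies(child, installed_cpes)
               for child in node.get("children", [])]
    results += [m["cpe23Uri"] in installed_cpes
                for m in node.get("cpe_match", [])]
    if node.get("operator") == "AND":
        return bool(results) and all(results)
    return any(results)  # "OR" is the default

def record_applies(configurations, installed_cpes):
    return any(node_applies(node, installed_cpes)
               for entry in configurations["data"]
               for node in entry["nodes"])

# e.g. an H700s running its stock firmware satisfies the last AND node above:
installed = {
    "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*",
    "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*",
}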
  "credits": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/credits#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Red Hat",
    "sources": [
      {
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "169941"
      }
    ],
    "trust": 0.7
  },
  "cve": "CVE-2021-45485",
  "cvss": {
    "@context": {
      "cvssV2": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2"
      },
      "cvssV3": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/"
      },
      "severity": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/cvss/severity#"
        },
        "@id": "https://www.variotdbs.pl/ref/cvss/severity"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        },
        "@id": "https://www.variotdbs.pl/ref/sources"
      }
    },
    "data": [
      {
        "cvssV2": [
          {
            "acInsufInfo": false,
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "NVD",
            "availabilityImpact": "NONE",
            "baseScore": 5.0,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "obtainAllPrivilege": false,
            "obtainOtherPrivilege": false,
            "obtainUserPrivilege": false,
            "severity": "MEDIUM",
            "trust": 1.0,
            "userInteractionRequired": false,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "acInsufInfo": null,
            "accessComplexity": "Low",
            "accessVector": "Network",
            "authentication": "None",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 5.0,
            "confidentialityImpact": "Partial",
            "exploitabilityScore": null,
            "id": "CVE-2021-45485",
            "impactScore": null,
            "integrityImpact": "None",
            "obtainAllPrivilege": null,
            "obtainOtherPrivilege": null,
            "obtainUserPrivilege": null,
            "severity": "Medium",
            "trust": 0.9,
            "userInteractionRequired": null,
            "vectorString": "AV:N/AC:L/Au:N/C:P/I:N/A:N",
            "version": "2.0"
          },
          {
            "accessComplexity": "LOW",
            "accessVector": "NETWORK",
            "authentication": "NONE",
            "author": "VULHUB",
            "availabilityImpact": "NONE",
            "baseScore": 5.0,
            "confidentialityImpact": "PARTIAL",
            "exploitabilityScore": 10.0,
            "id": "VHN-409116",
            "impactScore": 2.9,
            "integrityImpact": "NONE",
            "severity": "MEDIUM",
            "trust": 0.1,
            "vectorString": "AV:N/AC:L/AU:N/C:P/I:N/A:N",
            "version": "2.0"
          }
        ],
        "cvssV3": [
          {
            "attackComplexity": "LOW",
            "attackVector": "NETWORK",
            "author": "NVD",
            "availabilityImpact": "NONE",
            "baseScore": 7.5,
            "baseSeverity": "HIGH",
            "confidentialityImpact": "HIGH",
            "exploitabilityScore": 3.9,
            "impactScore": 3.6,
            "integrityImpact": "NONE",
            "privilegesRequired": "NONE",
            "scope": "UNCHANGED",
            "trust": 1.0,
            "userInteraction": "NONE",
            "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.1"
          },
          {
            "attackComplexity": "Low",
            "attackVector": "Network",
            "author": "NVD",
            "availabilityImpact": "None",
            "baseScore": 7.5,
            "baseSeverity": "High",
            "confidentialityImpact": "High",
            "exploitabilityScore": null,
            "id": "CVE-2021-45485",
            "impactScore": null,
            "integrityImpact": "None",
            "privilegesRequired": "None",
            "scope": "Unchanged",
            "trust": 0.8,
            "userInteraction": "None",
            "vectorString": "CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
            "version": "3.0"
          }
        ],
        "severity": [
          {
            "author": "NVD",
            "id": "CVE-2021-45485",
            "trust": 1.8,
            "value": "HIGH"
          },
          {
            "author": "CNNVD",
            "id": "CNNVD-202112-2265",
            "trust": 0.6,
            "value": "HIGH"
          },
          {
            "author": "VULHUB",
            "id": "VHN-409116",
            "trust": 0.1,
            "value": "MEDIUM"
          },
          {
            "author": "VULMON",
            "id": "CVE-2021-45485",
            "trust": 0.1,
            "value": "MEDIUM"
          }
        ]
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
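The NVD v3.1 sub-scores above (exploitabilityScore 3.9, impactScore 3.6, baseScore 7.5) follow from the vector CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N and the metric weights published in the CVSS v3.1 specification; a worked check, assuming those standard constants:

# Worked check of the NVD CVSSv3.1 entry above (scope unchanged).
import math

AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.85    # Network / Low / None / None
C, I, A = 0.56, 0.0, 0.0                   # High / None / None

iss = 1 - (1 - C) * (1 - I) * (1 - A)      # 0.56
impact = 6.42 * iss                        # 3.5952 -> reported as 3.6
exploitability = 8.22 * AV * AC * PR * UI  # 3.887... -> reported as 3.9

def roundup(x):
    # CVSS v3.1 "round up to one decimal place"
    return math.ceil(x * 10) / 10

base = roundup(min(impact + exploitability, 10)) if impact > 0 else 0.0
assert base == 7.5                         # matches baseScore above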
  "description": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/description#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "In the IPv6 implementation in the Linux kernel before 5.13.3, net/ipv6/output_core.c has an information leak because of certain use of a hash table which, although big, doesn\u0027t properly consider that IPv6-based attackers can typically choose among many IPv6 source addresses. Linux Kernel Exists in the use of cryptographic algorithms.Information may be obtained. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n                   Red Hat Security Advisory\n\nSynopsis:          Important: kernel security, bug fix, and enhancement update\nAdvisory ID:       RHSA-2022:1988-01\nProduct:           Red Hat Enterprise Linux\nAdvisory URL:      https://access.redhat.com/errata/RHSA-2022:1988\nIssue date:        2022-05-10\nCVE Names:         CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 \n                   CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 \n                   CVE-2021-3669 CVE-2021-3743 CVE-2021-3744 \n                   CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 \n                   CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 \n                   CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 \n                   CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 \n                   CVE-2021-21781 CVE-2021-26401 CVE-2021-29154 \n                   CVE-2021-37159 CVE-2021-41864 CVE-2021-42739 \n                   CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 \n                   CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 \n                   CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 \n                   CVE-2022-0322 CVE-2022-1011 \n=====================================================================\n\n1. Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 8. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder (v. 8) - aarch64, ppc64le, x86_64\nRed Hat Enterprise Linux BaseOS (v. 8) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. 
\n\nSecurity Fix(es):\n\n* kernel: fget: check that the fd still exists after getting a ref to it\n(CVE-2021-4083)\n\n* kernel: avoid cyclic entity chains due to malformed USB descriptors\n(CVE-2020-0404)\n\n* kernel: speculation on incompletely validated data on IBM Power9\n(CVE-2020-4788)\n\n* kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n(CVE-2020-13974)\n\n* kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a\nuse-after-free (CVE-2021-0941)\n\n* kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n(CVE-2021-3612)\n\n* kernel: reading /proc/sysvipc/shm does not scale with large shared memory\nsegment counts (CVE-2021-3669)\n\n* kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n(CVE-2021-3743)\n\n* kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n(CVE-2021-3744)\n\n* kernel: possible use-after-free in bluetooth module (CVE-2021-3752)\n\n* kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg\nlimits and DoS attacks (CVE-2021-3759)\n\n* kernel: DoS in ccp_run_aes_gcm_cmd() function (CVE-2021-3764)\n\n* kernel: sctp: Invalid chunks may be used to remotely remove existing\nassociations (CVE-2021-3772)\n\n* kernel: lack of port sanity checking in natd and netfilter leads to\nexploit of OpenVPN clients (CVE-2021-3773)\n\n* kernel: possible leak or coruption of data residing on hugetlbfs\n(CVE-2021-4002)\n\n* kernel: security regression for CVE-2018-13405 (CVE-2021-4037)\n\n* kernel: Buffer overwrite in decode_nfs_fh function (CVE-2021-4157)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed\npackets replies (CVE-2021-20322)\n\n* kernel: arm: SIGPAGE information disclosure vulnerability\n(CVE-2021-21781)\n\n* hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715 (CVE-2021-26401)\n\n* kernel: Local privilege escalation due to incorrect BPF JIT branch\ndisplacement computation (CVE-2021-29154)\n\n* kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n(CVE-2021-37159)\n\n* kernel: eBPF multiplication integer overflow in\nprealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to\nout-of-bounds write (CVE-2021-41864)\n\n* kernel: Heap buffer overflow in firedtv driver (CVE-2021-42739)\n\n* kernel: ppc: kvm: allows a malicious KVM guest to crash the host\n(CVE-2021-43056)\n\n* kernel: an array-index-out-bounds in detach_capi_ctr in\ndrivers/isdn/capi/kcapi.c (CVE-2021-43389)\n\n* kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c\nallows an attacker to cause DoS via crafted USB device (CVE-2021-43976)\n\n* kernel: use-after-free in the TEE subsystem (CVE-2021-44733)\n\n* kernel: information leak in the IPv6 implementation (CVE-2021-45485)\n\n* kernel: information leak in the IPv4 implementation (CVE-2021-45486)\n\n* hw: cpu: intel: Branch History Injection (BHI) (CVE-2022-0001)\n\n* hw: cpu: intel: Intra-Mode BTI (CVE-2022-0002)\n\n* kernel: Local denial of service in bond_ipsec_add_sa (CVE-2022-0286)\n\n* kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n(CVE-2022-0322)\n\n* kernel: FUSE allows UAF reads of write() buffers, allowing theft of\n(partial) /etc/shadow hashes (CVE-2022-1011)\n\n* kernel: use-after-free in nouveau kernel module (CVE-2020-27820)\n\nFor more details about the security issue(s), including the 
impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. \n\nAdditional Changes:\n\nFor detailed information on changes in this release, see the Red Hat\nEnterprise Linux 8.6 Release Notes linked from the References section. \n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. Bugs fixed (https://bugzilla.redhat.com/):\n\n1888433 - CVE-2020-4788 kernel: speculation on incompletely validated data on IBM Power9\n1901726 - CVE-2020-27820 kernel: use-after-free in nouveau kernel module\n1919791 - CVE-2020-0404 kernel: avoid cyclic entity chains due to malformed USB descriptors\n1946684 - CVE-2021-29154 kernel: Local privilege escalation due to incorrect BPF JIT branch displacement computation\n1951739 - CVE-2021-42739 kernel: Heap buffer overflow in firedtv driver\n1957375 - [RFE] x86, tsc: Add kcmdline args for skipping tsc calibration sequences\n1974079 - CVE-2021-3612 kernel: joydev: zero size passed to joydev_handle_JSIOCSBTNMAP()\n1981950 - CVE-2021-21781 kernel: arm: SIGPAGE information disclosure vulnerability\n1983894 - Hostnetwork pod to service backed by hostnetwork on the same node is not working with OVN Kubernetes\n1985353 - CVE-2021-37159 kernel: use-after-free in hso_free_net_device() in drivers/net/usb/hso.c\n1986473 - CVE-2021-3669 kernel: reading /proc/sysvipc/shm does not scale with large shared memory segment counts\n1994390 - FIPS: deadlock between PID 1 and \"modprobe crypto-jitterentropy_rng\" at boot, preventing system to boot\n1997338 - block: update to upstream v5.14\n1997467 - CVE-2021-3764 kernel: DoS in ccp_run_aes_gcm_cmd() function\n1997961 - CVE-2021-3743 kernel: out-of-bound Read in qrtr_endpoint_post in net/qrtr/qrtr.c\n1999544 - CVE-2021-3752 kernel: possible use-after-free in bluetooth module\n1999675 - CVE-2021-3759 kernel: unaccounted ipc objects in Linux kernel lead to breaking memcg limits and DoS attacks\n2000627 - CVE-2021-3744 kernel: crypto: ccp - fix resource leaks in ccp_run_aes_gcm_cmd()\n2000694 - CVE-2021-3772 kernel: sctp: Invalid chunks may be used to remotely remove existing associations\n2004949 - CVE-2021-3773 kernel: lack of port sanity checking in natd and netfilter leads to exploit of OpenVPN clients\n2009312 - Incorrect system time reported by the cpu guest statistics (PPC only). 
\n2009521 - XFS: sync to upstream v5.11\n2010463 - CVE-2021-41864 kernel: eBPF multiplication integer overflow in prealloc_elems_and_freelist() in kernel/bpf/stackmap.c leads to out-of-bounds write\n2011104 - statfs reports wrong free space for small quotas\n2013180 - CVE-2021-43389 kernel: an array-index-out-bounds in detach_capi_ctr in drivers/isdn/capi/kcapi.c\n2014230 - CVE-2021-20322 kernel: new DNS Cache Poisoning Attack based on ICMP fragment needed packets replies\n2015525 - SCTP peel-off with SELinux and containers in OCP\n2015755 - zram: zram leak with warning when running zram02.sh in ltp\n2016169 - CVE-2020-13974 kernel: integer overflow in k_ascii() in drivers/tty/vt/keyboard.c\n2017073 - CVE-2021-43056 kernel: ppc: kvm: allows a malicious KVM guest to crash the host\n2017796 - ceph omnibus backport for RHEL-8.6.0\n2018205 - CVE-2021-0941 kernel: out-of-bounds read in bpf_skb_change_head() of filter.c due to a use-after-free\n2022814 - Rebase the input and HID stack in 8.6 to v5.15\n2025003 - CVE-2021-43976 kernel: mwifiex_usb_recv() in drivers/net/wireless/marvell/mwifiex/usb.c allows an attacker to cause DoS via crafted USB device\n2025726 - CVE-2021-4002 kernel: possible leak or coruption of data residing on hugetlbfs\n2027239 - CVE-2021-4037 kernel: security regression for CVE-2018-13405\n2029923 - CVE-2021-4083 kernel: fget: check that the fd still exists after getting a ref to it\n2030476 - Kernel 4.18.0-348.2.1 secpath_cache memory leak involving strongswan tunnel\n2030747 - CVE-2021-44733 kernel: use-after-free in the TEE subsystem\n2031200 - rename(2) fails on subfolder mounts when the share path has a trailing slash\n2034342 - CVE-2021-4157 kernel: Buffer overwrite in decode_nfs_fh function\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2037019 - CVE-2022-0286 kernel: Local denial of service in bond_ipsec_add_sa\n2039911 - CVE-2021-45485 kernel: information leak in the IPv6 implementation\n2039914 - CVE-2021-45486 kernel: information leak in the IPv4 implementation\n2042798 - [RHEL8.6][sfc] General sfc driver update\n2042822 - CVE-2022-0322 kernel: DoS in sctp_addto_chunk in net/sctp/sm_make_chunk.c\n2043453 - [RHEL8.6 wireless] stack \u0026 drivers general update to v5.16+\n2046021 - kernel 4.18.0-358.el8 async dirops causes write errors with namespace restricted caps\n2048251 - Selinux  is not  allowing SCTP connection setup between inter pod communication in enforcing mode\n2061700 - CVE-2021-26401 hw: cpu: LFENCE/JMP Mitigation Update for CVE-2017-5715\n2061712 - CVE-2022-0001 hw: cpu: intel: Branch History Injection (BHI)\n2061721 - CVE-2022-0002 hw: cpu: intel: Intra-Mode BTI\n2064855 - CVE-2022-1011 kernel: FUSE allows UAF reads of write() buffers, allowing theft of (partial) /etc/shadow hashes\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS (v. 
8):\n\nSource:\nkernel-4.18.0-372.9.1.el8.src.rpm\n\naarch64:\nbpftool-4.18.0-372.9.1.el8.aarch64.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-core-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-devel-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-headers-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-modules-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.aarch64.rpm\nperf-4.18.0-372.9.1.el8.aarch64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-372.9.1.el8.noarch.rpm\nkernel-doc-4.18.0-372.9.1.el8.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-372.9.1.el8.ppc64le.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-core-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-headers-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-modules-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-372.9.1.el8.s390x.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-372.9.1.el8.s390x.rpm\nkernel-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-headers-4.18.0-372.9.1.el8.s390x.rpm\nkernel-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nkernel-tools-4.18.0-372.9.1.el8.s390x.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-core-4.18.0-372.9.1.el8.s390x.rpm\nkernel
-zfcpdump-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-372.9.1.el8.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-372.9.1.el8.s390x.rpm\nperf-4.18.0-372.9.1.el8.s390x.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\npython3-perf-4.18.0-372.9.1.el8.s390x.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-372.9.1.el8.x86_64.rpm\nbpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-core-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-cross-headers-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-core-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-devel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-modules-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-devel-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-headers-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-modules-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-modules-extra-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-libs-4.18.0-372.9.1.el8.x86_64.rpm\nperf-4.18.0-372.9.1.el8.x86_64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\n\nRed Hat CodeReady Linux Builder (v. 8):\n\naarch64:\nbpftool-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.aarch64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.ppc64le.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-372.9.1.el8.x86_64.rpm\nperf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\npython3-perf-debuginfo-4.18.0-372.9.1.el8.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security.  Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. 
References:\n\nhttps://access.redhat.com/security/cve/CVE-2020-0404\nhttps://access.redhat.com/security/cve/CVE-2020-4788\nhttps://access.redhat.com/security/cve/CVE-2020-13974\nhttps://access.redhat.com/security/cve/CVE-2020-27820\nhttps://access.redhat.com/security/cve/CVE-2021-0941\nhttps://access.redhat.com/security/cve/CVE-2021-3612\nhttps://access.redhat.com/security/cve/CVE-2021-3669\nhttps://access.redhat.com/security/cve/CVE-2021-3743\nhttps://access.redhat.com/security/cve/CVE-2021-3744\nhttps://access.redhat.com/security/cve/CVE-2021-3752\nhttps://access.redhat.com/security/cve/CVE-2021-3759\nhttps://access.redhat.com/security/cve/CVE-2021-3764\nhttps://access.redhat.com/security/cve/CVE-2021-3772\nhttps://access.redhat.com/security/cve/CVE-2021-3773\nhttps://access.redhat.com/security/cve/CVE-2021-4002\nhttps://access.redhat.com/security/cve/CVE-2021-4037\nhttps://access.redhat.com/security/cve/CVE-2021-4083\nhttps://access.redhat.com/security/cve/CVE-2021-4157\nhttps://access.redhat.com/security/cve/CVE-2021-4197\nhttps://access.redhat.com/security/cve/CVE-2021-4203\nhttps://access.redhat.com/security/cve/CVE-2021-20322\nhttps://access.redhat.com/security/cve/CVE-2021-21781\nhttps://access.redhat.com/security/cve/CVE-2021-26401\nhttps://access.redhat.com/security/cve/CVE-2021-29154\nhttps://access.redhat.com/security/cve/CVE-2021-37159\nhttps://access.redhat.com/security/cve/CVE-2021-41864\nhttps://access.redhat.com/security/cve/CVE-2021-42739\nhttps://access.redhat.com/security/cve/CVE-2021-43056\nhttps://access.redhat.com/security/cve/CVE-2021-43389\nhttps://access.redhat.com/security/cve/CVE-2021-43976\nhttps://access.redhat.com/security/cve/CVE-2021-44733\nhttps://access.redhat.com/security/cve/CVE-2021-45485\nhttps://access.redhat.com/security/cve/CVE-2021-45486\nhttps://access.redhat.com/security/cve/CVE-2022-0001\nhttps://access.redhat.com/security/cve/CVE-2022-0002\nhttps://access.redhat.com/security/cve/CVE-2022-0286\nhttps://access.redhat.com/security/cve/CVE-2022-0322\nhttps://access.redhat.com/security/cve/CVE-2022-1011\nhttps://access.redhat.com/security/updates/classification/#important\nhttps://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/\n\n8. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYnqSF9zjgjWX9erEAQjBXQ/8DSpFUMNN6ZVFtli2KuVowVLS+14J0jtj\n0zxpr0skJT8vVulU3VTeURBMdg9NAo9bj3R5KTk2+dC+AMuHET5aoVvaYmimBGKL\n5qzpu7q9Z0aaD2I288suHCnYuRJnt+qKZtNa4hlcY92bN0tcYBonxsdIS2xM6xIu\nGHNS8HNVUNz4PuCBfmbITvgX9Qx+iZQVlVccDBG5LDpVwgOtnrxHKbe5E499v/9M\noVoN+eV9ulHAZdCHWlUAahbsvEqDraCKNT0nHq/xO5dprPjAcjeKYMeaICtblRr8\nk+IouGywaN+mW4sBjnaaiuw2eAtoXq/wHisX1iUdNkroqcx9NBshWMDBJnE4sxQJ\nZOSc8B6yjJItPvUI7eD3BDgoka/mdoyXTrg+9VRrir6vfDHPrFySLDrO1O5HM5fO\n3sExCVO2VM7QMCGHJ1zXXX4szk4SV/PRsjEesvHOyR2xTKZZWMsXe1h9gYslbADd\ntW0yco/G23xjxqOtMKuM/nShBChflMy9apssldiOfdqODJMv5d4rRpt0xgmtSOM6\nqReveuQCasmNrGlAHgDwbtWz01fmSuk9eYDhZNmHA3gxhoHIV/y+wr0CLbOQtDxT\np79nhiqwUo5VMj/X30Lu0Wl3ptLuhRWamzTCkEEzdubr8aVsT4RRNQU3KfVFfpT1\nMWp/2ui3i80=\n=Fdgy\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. 8) - x86_64\n\n3. 
Description:\n\nThe kernel-rt packages provide the Real Time Linux Kernel, which enables\nfine-tuning for systems with extremely high determinism requirements. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] ?Update Repository? option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - ?Replication repository? wizard has no validation for name length\n2040695 - [MTC UI] ?Add Cluster? wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry ?xyz.com?\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on ?Migrations? page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the ?Last State? field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] ?Update cluster? button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster?s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI  drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC]  Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. 
Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.5.0 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. See\nthe following Release Notes documentation, which will be updated shortly\nfor this release, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/\n\nSecurity fixes: \n\n* nodejs-json-schema: Prototype pollution vulnerability (CVE-2021-3918)\n\n* containerd: Unprivileged pod may bind mount any privileged regular file\non disk (CVE-2021-43816)\n\n* minio: user privilege escalation in AddUser() admin API (CVE-2021-43858)\n\n* openssl: Infinite loop in BN_mod_sqrt() reachable when parsing\ncertificates (CVE-2022-0778)\n\n* imgcrypt: Unauthorized access to encryted container image on a shared\nsystem due to missing check in CheckAuthorization() code path\n(CVE-2022-24778)\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* node-fetch: exposure of sensitive information to an unauthorized actor\n(CVE-2022-0235)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* golang: crash in a golang.org/x/crypto/ssh server (CVE-2022-27191)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\n* opencontainers: OCI manifest and index parsing confusion (CVE-2021-41190)\n\nBug fixes:\n\n* RFE Copy secret with specific secret namespace, name for source and name,\nnamespace and cluster label for target (BZ# 2014557)\n\n* RHACM 2.5.0 images (BZ# 2024938)\n\n* [UI] When you delete host agent from infraenv no confirmation message\nappear (Are you sure you want to delete x?) (BZ#2028348)\n\n* Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller\nnot working properly (BZ# 2028647)\n\n* create cluster pool -\u003e choose infra type, As a result infra providers\ndisappear from UI. 
(BZ# 2033339)\n\n* Restore/backup shows up as Validation failed but the restore backup\nstatus in ACM shows success (BZ# 2034279)\n\n* Observability - OCP 311 node role are not displayed completely (BZ#\n2038650)\n\n* Documented uninstall procedure leaves many leftovers (BZ# 2041921)\n\n* infrastructure-operator pod crashes due to insufficient privileges in ACM\n2.5 (BZ# 2046554)\n\n* Acm failed to install due to some missing CRDs in operator (BZ# 2047463)\n\n* Navigation icons no longer showing in ACM 2.5 (BZ# 2051298)\n\n* ACM home page now includes /home/ in url (BZ# 2051299)\n\n* proxy heading in Add Credential should be capitalized (BZ# 2051349)\n\n* ACM 2.5 tries to create new MCE instance when install on top of existing\nMCE 2.0 (BZ# 2051983)\n\n* Create Policy button does not work and user cannot use console to create\npolicy (BZ# 2053264)\n\n* No cluster information was displayed after a policyset was created (BZ#\n2053366)\n\n* Dynamic plugin update does not take effect in Firefox (BZ# 2053516)\n\n* Replicated policy should not be available when creating a Policy Set (BZ#\n2054431)\n\n* Placement section in Policy Set wizard does not reset when users click\n\"Back\" to re-configured placement (BZ# 2054433)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n2014557 - RFE Copy secret with specific secret namespace, name for source and name, namespace and cluster label for target\n2024702 - CVE-2021-3918 nodejs-json-schema: Prototype pollution vulnerability\n2024938 - CVE-2021-41190 opencontainers: OCI manifest and index parsing confusion\n2028224 - RHACM 2.5.0 images\n2028348 - [UI] When you delete host agent from infraenv no confirmation message appear (Are you sure you want to delete x?)\n2028647 - Clusters are in \u0027Degraded\u0027 status with upgrade env due to obs-controller not working properly\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2033339 - create cluster pool -\u003e choose infra type , As a result infra providers disappear from UI. 
\n2034279 - Restore/backup shows up as Validation failed but the restore backup status in ACM shows success\n2036252 - CVE-2021-43858 minio: user privilege escalation in AddUser() admin API\n2038650 - Observability - OCP 311 node role are not displayed completely\n2041921 - Documented uninstall procedure leaves many leftovers\n2044434 - CVE-2021-43816 containerd: Unprivileged pod may bind mount any privileged regular file on disk\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2046554 - infrastructure-operator pod crashes due to insufficient privileges in ACM 2.5\n2047463 - Acm failed to install due to some missing CRDs in operator\n2051298 - Navigation icons no longer showing in ACM 2.5\n2051299 - ACM home page now includes /home/ in url\n2051349 - proxy heading in Add Credential should be capitalized\n2051983 - ACM 2.5 tries to create new MCE instance when install on top of existing MCE 2.0\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature  authenticated user can obtain the privileges of the System account\n2053264 - Create Policy button does not work and user cannot use console to create policy\n2053366 - No cluster information was displayed after a policyset was created\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2053516 - Dynamic plugin update does not take effect in Firefox\n2054431 - Replicated policy should not be available when creating a Policy Set\n2054433 - Placement section in Policy Set wizard does not reset when users click \"Back\" to re-configured placement\n2054772 - credentialName is not parsed correctly in UI notifications/alerts when creating/updating a discovery config\n2054860 - Cluster overview page crashes for on-prem cluster\n2055333 - Unable to delete assisted-service operator\n2055900 - If MCH is installed on existing MCE and both are in multicluster-engine namespace , uninstalling MCH terminates multicluster-engine namespace\n2056485 - [UI]  In infraenv detail the host list don\u0027t have pagination\n2056701 - Non platform install fails agentclusterinstall CRD is outdated in rhacm2.5\n2057060 - [CAPI] Unable to create ClusterDeployment due to service account restrictions (ACM + Bundled Assisted)\n2058435 - Label cluster.open-cluster-management.io/backup-cluster stamped \u0027unknown\u0027 for velero backups\n2059779 - spec.nodeSelector is missing in MCE instance created by MCH upon installing ACM on infra nodes\n2059781 - Policy UI crashes when viewing details of configuration policies for backupschedule that does not exist\n2060135 - [assisted-install] agentServiceConfig left orphaned after uninstalling ACM\n2060151 - Policy set of the same name cannot be re-created after the previous one has been deleted\n2060230 - [UI] Delete host modal has incorrect host\u0027s name populated\n2060309 - multiclusterhub stuck in installing on \"ManagedClusterConditionAvailable\" [intermittent]\n2060469 - The development branch of the Submariner addon deploys 0.11.0, not 0.12.0\n2060550 - MCE installation hang due to no console-mce-console deployment available\n2060603 - prometheus doesn\u0027t display managed clusters\n2060831 - Observability - prometheus-operator failed to start on *KS\n2060934 - Cannot provision AWS OCP 4.9 cluster from Power Hub\n2061260 - The value of the policyset placement should be filtered space when input cluster label expression\n2061311 - Cleanup of installed spoke clusters hang on 
deletion of spoke namespace\n2061659 - the network section in create cluster -\u003e Networking include the brace in the network title\n2061798 - [ACM 2.5] The service of Cluster Proxy addon was missing\n2061838 - ACM component subscriptions are removed when enabling spec.disableHubSelfManagement in MCH\n2062009 - No name validation is performed on Policy and Policy Set Wizards\n2062022 - cluster.open-cluster-management.io/backup-cluster of velero schedules should populate the corresponding hub clusterID\n2062025 - No validation is done on yaml\u0027s format or content in Policy and Policy Set wizards\n2062202 - CVE-2022-0778 openssl: Infinite loop in BN_mod_sqrt() reachable when parsing certificates\n2062337 - velero schedules get re-created after the backupschedule is in \u0027BackupCollision\u0027 phase\n2062462 - Upgrade to 2.5 hang due to irreconcilable errors of grc-sub and search-prod-sub in MCH\n2062556 - Always return the policyset page after created the policy from UI\n2062787 - Submariner Add-on UI does not indicate on Broker error\n2063055 - User with cluserrolebinding of open-cluster-management:cluster-manager-admin role can\u0027t see policies and clusters page\n2063341 - Release imagesets are missing in the console for ocp 4.10\n2063345 - Application Lifecycle- UI shows white blank page when the page is Refreshed\n2063596 - claim clusters from clusterpool throws errors\n2063599 - Update the message in clusterset -\u003e clusterpool page since we did not allow to add clusterpool to clusterset by resourceassignment\n2063697 - Observability - MCOCR reports object-storage secret without AWS access_key in STS enabled env\n2064231 - Can not clean the instance type for worker pool when create the clusters\n2064247 - prefer UI can add the architecture type when create the cluster\n2064392 - multicloud oauth-proxy failed to log users in on web\n2064477 - Click at \"Edit Policy\" for each policy leads to a blank page\n2064509 - No option to view the ansible job details and its history in the Automation wizard after creation of the automation job\n2064516 - Unable to delete an automation job of a policy\n2064528 - Columns of Policy Set, Status and Source on Policy page are not sortable\n2064535 - Different messages on the empty pages of Overview and Clusters when policy is disabled\n2064702 - CVE-2022-27191 golang: crash in a golang.org/x/crypto/ssh server\n2064722 - [Tracker] [DR][ACM 2.5] Applications are not getting deployed on managed cluster\n2064899 - Failed to provision openshift 4.10 on bare metal\n2065436 - \"Filter\" drop-down list does not show entries of the policies that have no top-level remediation specified\n2066198 - Issues about disabled policy from UI\n2066207 - The new created policy should be always shown up on the first line\n2066333 - The message was confuse when the cluster status is Running\n2066383 - MCE install failing on proxy disconnected environment\n2066433 - Logout not working for ACM 2.5\n2066464 - console-mce-console pods throw ImagePullError after upgrading to ocp 4.10\n2066475 - User with view-only rolebinding should not be allowed to create policy, policy set and automation job\n2066544 - The search box can\u0027t work properly in Policies page\n2066594 - RFE:  Can\u0027t open the helm source link of the backup-restore-enabled policy from UI\n2066650 - minor issues in cluster curator due to the startup throws errors\n2066751 - the image repo of application-manager did not updated to use the image repo in MCE/MCH configuration\n2066834 - Hibernating 
cluster(s) in cluster pool stuck in \u0027Stopping\u0027 status after restore activation\n2066842 - cluster pool credentials are not backed up\n2066914 - Unable to remove cluster value during configuration of the label expressions for policy and policy set\n2066940 - Validation fired out for https proxy when the link provided not starting with https\n2066965 - No message is displayed in Policy Wizard to indicate a policy externally managed\n2066979 - MIssing groups in policy filter options comparing to previous RHACM version\n2067053 - I was not able to remove the image mirror content when create the cluster\n2067067 - Can\u0027t filter the cluster info when clicked the cluster in the Placement section\n2067207 - Bare metal asset secrets are not backed up\n2067465 - Categories,Standards, and Controls annotations are not updated after user has deleted a selected template\n2067713 - Columns on policy\u0027s \"Results\" are not sort-able as in previous release\n2067728 - Can\u0027t search in the policy creation or policyset creation Yaml editor\n2068304 - Application Lifecycle- Replicasets arent showing the logs console in Topology\n2068309 - For policy wizard in dynamics plugin environment, buttons at the bottom should be sticky and the contents of the Policy should scroll\n2068312 - Application Lifecycle - Argo Apps are not showing overview details and topology after upgrading from 2.4\n2068313 - Application Lifecycle - Refreshing overview page leads to a blank page\n2068328 - A cluster\u0027s \"View history\" page should not contain all clusters\u0027 violations history\n2068387 - Observability - observability operator always CrashLoopBackOff in FIPS upgrading hub\n2068993 - Observability - Node list is not filtered according to nodeType on OCP 311 dashboard\n2069329 - config-policy-controller addon with \"Unknown\" status in OCP 3.11 managed cluster after upgrade hub to 2.5\n2069368 - CVE-2022-24778 imgcrypt: Unauthorized access to encryted container image on a shared system due to missing check in CheckAuthorization() code path\n2069469 - Status of unreachable clusters is not reported in several places on GRC panels\n2069615 - The YAML editor can\u0027t work well when login UI using dynamic console plugin\n2069622 - No validation for policy template\u0027s name\n2069698 - After claim a cluster from clusterpool, the cluster pages become very very slow\n2069867 - Error occurs when trying to edit an application set/subscription\n2069870 - ACM/MCE Dynamic Plugins - 404: Page Not Found Error Occurs - intermittent crashing\n2069875 - Cluster secrets are not being created in the managed cluster\u0027s namespace\n2069895 - Application Lifecycle - Replicaset and Pods gives error messages when Yaml is selected on sidebar\n2070203 - Blank Application is shown when editing an Application with AnsibleJobs\n2070782 - Failed Secret Propagation to the Same Namespace as the AnsibleJob CR\n2070846 - [ACM 2.5] Can\u0027t re-add the default clusterset label after removing it from a managedcluster on BM SNO hub\n2071066 - Policy set details panel does not work when deployed into namespace different than \"default\"\n2071173 - Configured RunOnce automation job is not displayed although the policy has no violation\n2071191 - MIssing title on details panel after clicking \"view details\" of a policy set card\n2071769 - Placement must be always configured or error is reported when creating a policy\n2071818 - ACM logo not displayed in About info modal\n2071869 - Topology includes the status of local cluster 
resources when Application is only deployed to managed cluster\n2072009 - CVE-2022-24785 Moment.js: Path traversal  in moment.locale\n2072097 - Local Cluster is shown as Remote on the Application Overview Page and Single App Overview Page\n2072104 - Inconsistent \"Not Deployed\" Icon Used Between 2.4 and 2.5 as well as the Overview and Topology\n2072177 - Cluster Resource Status is showing App Definition Statuses as well\n2072227 - Sidebar Statuses Need to Be Updated to Reflect Cluster List and Cluster Resource Statuses\n2072231 - Local Cluster not included in the appsubreport for Helm Applications Deployed on All Clusters\n2072334 - Redirect URL is now to the details page after created a policy\n2072342 - Shows \"NaN%\" in the ring chart when add the disabled policy into policyset and view its details\n2072350 - CRD Deployed via Application Console does not have correct deployment status and spelling\n2072359 - Report the error when editing compliance type in the YAML editor and then submit the changes\n2072504 - The policy has violations on the failed managed cluster\n2072551 - URL dropdown is not being rendered with an Argo App with a new URL\n2072773 - When a channel is deleted and recreated through the App Wizard, application creation stalls and warning pops up\n2072824 - The edit/delete policyset button should be greyed when using viewer check\n2072829 - When Argo App with jsonnet object is deployed, topology and cluster status would fail to display the correct statuses. \n2073179 - Policy controller was unable to retrieve violation status in for an OCP 3.11 managed cluster on ARM hub\n2073330 - Observabilityy - memory usage data are not collected even collect rule is fired on SNO\n2073355 - Get blank page when click policy with unknown status in Governance -\u003e Overview page\n2073508 - Thread responsible to get insights data from *ks clusters is broken\n2073557 - appsubstatus is not deleted for Helm applications when changing between 2 managed clusters\n2073726 - Placement of First Subscription gets overlapped by the Cluster Node in Application Topology\n2073739 - Console/App LC - Error message saying resource conflict only shows up in standalone ACM but not in Dynamic plugin\n2073740 - Console/App LC- Apps are deployed even though deployment do not proceed because of \"resource conflict\" error\n2074178 - Editing Helm Argo Applications does not Prune Old Resources\n2074626 - Policy placement failure during ZTP SNO scale test\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2074803 - The import cluster YAML editor shows the klusterletaddonconfig was required on MCE portal\n2074937 - UI allows creating cluster even when there are no ClusterImageSets\n2075416 - infraEnv failed to create image after restore\n2075440 - The policyreport CR is created for spoke clusters until restarted the insights-client pod\n2075739 - The lookup function won\u0027t check the referred resource whether exist when using template policies\n2076421 - Can\u0027t select existing placement for policy or policyset when editing policy or policyset\n2076494 - No policyreport CR for spoke clusters generated  in the disconnected env\n2076502 - The policyset card doesn\u0027t show the cluster status(violation/without violation) again after deleted one policy\n2077144 - GRC Ansible automation wizard does not display error of missing dependent Ansible Automation Platform operator\n2077149 - App UI shows no clusters cluster column of App Table when Discovery Applications is deployed to a managed 
cluster\n2077291 - Prometheus doesn\u0027t display acm_managed_cluster_info after upgrade from 2.4 to 2.5\n2077304 - Create Cluster button is disabled only if other clusters exist\n2077526 - ACM UI is very very slow after upgrade from 2.4 to 2.5\n2077562 - Console/App LC- Helm and Object bucket applications are not showing as deployed in the UI\n2077751 - Can\u0027t create a template policy from UI when the object\u0027s name is referring Golang text template syntax in this policy\n2077783 - Still show violation for clusterserviceversions after enforced \"Detect Image vulnerabilities \" policy template and the operator is installed\n2077951 - Misleading message indicated that a placement of a policy became one managed only by policy set\n2078164 - Failed to edit a policy without placement\n2078167 - Placement binding and rule names are not created in yaml when editing a policy previously created with no placement\n2078373 - Disable the hyperlink of *ks node in standalone MCE environment since the search component was not exists\n2078617 - Azure public credential details get pre-populated with base domain name in UI\n2078952 - View pod logs in search details returns error\n2078973 - Crashed pod is marked with success in Topology\n2079013 - Changing existing placement rules does not change YAML file\n2079015 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2079421 - Hyphen(s) is deleted unexpectedly in UI when yaml is turned on\n2079494 - Hitting Enter in yaml editor caused unexpected keys \"key00x:\" to be created\n2079533 - Clusters with no default clusterset do not get assigned default cluster when upgrading from ACM 2.4 to 2.5\n2079585 - When an Ansible Secret is propagated to an Ansible Application namespace, the propagated secret is shown in the Credentials page\n2079611 - Edit appset placement in UI with a different existing placement causes the current associated placement being deleted\n2079615 - Edit appset placement in UI with a new placement throws error upon submitting\n2079658 - Cluster Count is Incorrect in Application UI\n2079909 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2080172 - Still create policy automation successfully when the PolicyAutomation name exceed 63 characters\n2080215 - Get a blank page after go to policies page in upgraded env when using an user with namespace-role-binding of default view role\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080503 - vSphere network name doesn\u0027t allow entering spaces and doesn\u0027t reflect YAML changes\n2080567 - Number of cluster in violation in the table does not match other cluster numbers on the policy set details page\n2080712 - Select an existing placement configuration does not work\n2080776 - Unrecognized characters are displayed on policy and policy set yaml editors\n2081792 - When deploying an application to a clusterpool claimed cluster after upgrade, the application does not get deployed to the cluster\n2081810 - Type \u0027-\u0027 character in Name field caused previously typed character backspaced in  in the name field of policy wizard\n2081829 - Application deployed on local cluster\u0027s topology is crashing after upgrade\n2081938 - The deleted policy still be shown on the policyset review page when edit this policy set\n2082226 - Object Storage Topology includes residue of resources after Upgrade\n2082409 - Policy set details panel remains even after the policy set has been 
deleted\n2082449 - The hypershift-addon-agent deployment did not have imagePullSecrets\n2083038 - Warning still refers to the `klusterlet-addon-appmgr` pod rather than the `application-manager` pod\n2083160 - When editing a helm app with failing resources to another, the appsubstatus and the managedclusterview do not get updated\n2083434 - The provider-credential-controller did not support the RHV credentials type\n2083854 - When deploying an application with ansiblejobs multiple times with different namespaces, the topology shows all the ansiblejobs rather than just the one within the namespace\n2083870 - When editing an existing application and refreshing the `Select an existing placement configuration`, multiple occurrences of the placementrule gets displayed\n2084034 - The status message looks messy in the policy set card, suggest one kind status one a row\n2084158 - Support provisioning bm cluster where no provisioning network provided\n2084622 - Local Helm application shows cluster resources as `Not Deployed` in Topology [Upgrade]\n2085083 - Policies fail to copy to cluster namespace after ACM upgrade\n2085237 - Resources referenced by a channel are not annotated with backup label\n2085273 - Error querying for ansible job in app topology\n2085281 - Template name error is reported but the template name was found in a different replicated policy\n2086389 - The policy violations for hibernated cluster still be displayed on the policy set details page\n2087515 - Validation thrown out in configuration for disconnect install while creating bm credential\n2088158 - Object Storage Application deployed to all clusters is showing unemployed in topology [Upgrade]\n2088511 - Some cluster resources are not showing labels that are defined in the YAML\n\n5. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.8.53. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:7873\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nSecurity Fix(es):\n\n* go-getter: command injection vulnerability (CVE-2022-26945)\n* go-getter: unsafe download (issue 1 of 3) (CVE-2022-30321)\n* go-getter: unsafe download (issue 2 of 3) (CVE-2022-30322)\n* go-getter: unsafe download (issue 3 of 3) (CVE-2022-30323)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. Solution:\n\nFor OpenShift Container Platform 4.8 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html\n\nYou may download the oc tool and use it to inspect release image metadata\nfor x86_64, s390x, and ppc64le architectures. 
The image digests\nmay be found at\nhttps://quay.io/repository/openshift-release-dev/ocp-release?tab=tags\n\nThe sha values for the release are:\n\n(For x86_64 architecture)\nThe image digest is\nsha256:ac2bbfa7036c64bbdb44f9a74df3dbafcff1b851d812bf2a48c4fabcac3c7a53\n\n(For s390x architecture)\nThe image digest is\nsha256:ac2c74a664257cea299126d4f789cdf9a5a4efc4a4e8c2361b943374d4eb21e4\n\n(For ppc64le architecture)\nThe image digest is\nsha256:53adc42ed30ad39d7117837dbf5a6db6943a8f0b3b61bc0d046b83394f5c28b2\n\nAll OpenShift Container Platform 4.8 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html\n\n4. Bugs fixed (https://bugzilla.redhat.com/):\n\n2077100 - Console backend check for Web Terminal Operator incorrectly returns HTTP 204\n2092918 - CVE-2022-30321 go-getter: unsafe download (issue 1 of 3)\n2092923 - CVE-2022-30322 go-getter: unsafe download (issue 2 of 3)\n2092925 - CVE-2022-30323 go-getter: unsafe download (issue 3 of 3)\n2092928 - CVE-2022-26945 go-getter: command injection vulnerability\n\n5. JIRA issues fixed (https://issues.jboss.org/):\n\nOCPBUGS-2205 - Prefer local dns does not work expectedly on OCPv4.8\nOCPBUGS-2347 - [cluster-api-provider-baremetal] fix 4.8 build\nOCPBUGS-2577 - [4.8] ETCD Operator goes degraded when a second internal node ip is added\nOCPBUGS-2773 - e2e tests: Installs Red Hat Integration - 3scale operator test is failing due to change of Operator name\nOCPBUGS-2989 - [4.8] cri-o should report the stage of container and pod creation it\u0027s stuck at\n\n6",
    "sources": [
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "169941"
      }
    ],
    "trust": 2.43
  },
  "external_ids": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/external_ids#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "db": "NVD",
        "id": "CVE-2021-45485",
        "trust": 4.1
      },
      {
        "db": "PACKETSTORM",
        "id": "169941",
        "trust": 0.8
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434",
        "trust": 0.8
      },
      {
        "db": "PACKETSTORM",
        "id": "169695",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169997",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "169719",
        "trust": 0.7
      },
      {
        "db": "PACKETSTORM",
        "id": "166101",
        "trust": 0.6
      },
      {
        "db": "PACKETSTORM",
        "id": "169411",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0205",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5536",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.1225",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6062",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0215",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2023.0061",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0380",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.6111",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.2855",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0615",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3236",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.3136",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.0121",
        "trust": 0.6
      },
      {
        "db": "AUSCERT",
        "id": "ESB-2022.5590",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022070643",
        "trust": 0.6
      },
      {
        "db": "CS-HELP",
        "id": "SB2022062931",
        "trust": 0.6
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265",
        "trust": 0.6
      },
      {
        "db": "VULHUB",
        "id": "VHN-409116",
        "trust": 0.1
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167097",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167622",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167072",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167330",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167679",
        "trust": 0.1
      },
      {
        "db": "PACKETSTORM",
        "id": "167459",
        "trust": 0.1
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "169941"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "id": "VAR-202112-2255",
  "iot": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/iot#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": true,
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      }
    ],
    "trust": 0.01
  },
  "last_update_date": "2024-07-23T20:30:28.280000Z",
  "patch": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/patch#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "title": "NTAP-20220121-0001",
        "trust": 0.8,
        "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.3"
      },
      {
        "title": "Linux kernel Security vulnerabilities",
        "trust": 0.6,
        "url": "http://123.124.177.30/web/xxk/bdxqbyid.tag?id=177039"
      },
      {
        "title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226983 - security advisory"
      },
      {
        "title": "Red Hat: Important: kernel-rt security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20226991 - security advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Virtualization 4.9.7 Images security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20228609 - security advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.8.53 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227874 - security advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.10.39 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227211 - security advisory"
      },
      {
        "title": "Red Hat: Important: OpenShift Container Platform 4.9.51 bug fix and security update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20227216 - security advisory"
      },
      {
        "title": "Ubuntu Security Notice: USN-5299-1: Linux kernel vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5299-1"
      },
      {
        "title": "Red Hat: Important: kernel security, bug fix, and enhancement update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20221988 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.6.5 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224814 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Migration Toolkit for Containers (MTC) 1.7.2 security and bug fix update",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225483 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.4.5 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225201 - security advisory"
      },
      {
        "title": "Red Hat: Important: Red Hat Advanced Cluster Management 2.5 security updates, images, and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20224956 - security advisory"
      },
      {
        "title": "Red Hat: Moderate: Red Hat Advanced Cluster Management 2.3.11 security updates and bug fixes",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=red_hat_security_advisories\u0026qid=rhsa-20225392 - security advisory"
      },
      {
        "title": "Ubuntu Security Notice: USN-5343-1: Linux kernel vulnerabilities",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=ubuntu_security_notice\u0026qid=usn-5343-1"
      },
      {
        "title": "Siemens Security Advisories: Siemens Security Advisory",
        "trust": 0.1,
        "url": "https://vulmon.com/vendoradvisory?qidtp=siemens_security_advisories\u0026qid=ec6577109e640dac19a6ddb978afe82d"
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/syrti/poc_to_review "
      },
      {
        "title": "",
        "trust": 0.1,
        "url": "https://github.com/trhacknon/pocingit "
      }
    ],
    "sources": [
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      }
    ]
  },
  "problemtype_data": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "problemtype": "CWE-327",
        "trust": 1.1
      },
      {
        "problemtype": "Use of incomplete or dangerous cryptographic algorithms (CWE-327) [NVD evaluation ]",
        "trust": 0.8
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "references": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/references#",
      "data": {
        "@container": "@list"
      },
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": [
      {
        "trust": 2.6,
        "url": "https://arxiv.org/pdf/2112.09604.pdf"
      },
      {
        "trust": 2.6,
        "url": "https://www.oracle.com/security-alerts/cpujul2022.html"
      },
      {
        "trust": 1.8,
        "url": "https://security.netapp.com/advisory/ntap-20220121-0001/"
      },
      {
        "trust": 1.8,
        "url": "https://cdn.kernel.org/pub/linux/kernel/v5.x/changelog-5.13.3"
      },
      {
        "trust": 1.8,
        "url": "https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=62f20e068ccc50d6ab66fdb72ba90da2b9418c99"
      },
      {
        "trust": 1.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45485"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-45486"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/team/contact/"
      },
      {
        "trust": 0.7,
        "url": "https://access.redhat.com/security/cve/cve-2021-45485"
      },
      {
        "trust": 0.7,
        "url": "https://bugzilla.redhat.com/):"
      },
      {
        "trust": 0.7,
        "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3752"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4157"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3744"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-27820"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3743"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-1011"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4037"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-29154"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3759"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4083"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-37159"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3772"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-0404"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3669"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3764"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2020-13974"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-20322"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-0322"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3773"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4002"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-41864"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-43976"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4197"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-0002"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-4203"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-0941"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-43389"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-44733"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-3612"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-42739"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-0286"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2022-0001"
      },
      {
        "trust": 0.6,
        "url": "https://access.redhat.com/security/cve/cve-2021-26401"
      },
      {
        "trust": 0.6,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.1225"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.2855"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169411/red-hat-security-advisory-2022-6991-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169719/red-hat-security-advisory-2022-7216-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2023.0061"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0121"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0380"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169997/red-hat-security-advisory-2022-8609-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169695/red-hat-security-advisory-2022-7211-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022062931"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5590"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/169941/red-hat-security-advisory-2022-7874-01.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6062"
      },
      {
        "trust": 0.6,
        "url": "https://vigilance.fr/vulnerability/linux-kernel-information-disclosure-via-ipv6-id-generation-37138"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.6111"
      },
      {
        "trust": 0.6,
        "url": "https://packetstormsecurity.com/files/166101/ubuntu-security-notice-usn-5299-1.html"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0615"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3136"
      },
      {
        "trust": 0.6,
        "url": "https://www.cybersecurity-help.cz/vdb/sb2022070643"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0205"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.3236"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.0215"
      },
      {
        "trust": 0.6,
        "url": "https://www.auscert.org.au/bulletins/esb-2022.5536"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669"
      },
      {
        "trust": 0.5,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-43056"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2020-4788"
      },
      {
        "trust": 0.5,
        "url": "https://access.redhat.com/security/cve/cve-2021-21781"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3752"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3772"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3759"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/updates/classification/#important"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3773"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3743"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3764"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-37159"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4002"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3744"
      },
      {
        "trust": 0.4,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-4189"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3634"
      },
      {
        "trust": 0.4,
        "url": "https://access.redhat.com/security/cve/cve-2021-3737"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4083"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4037"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4157"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-0235"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2021-41617"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2022-1271"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/updates/classification/#moderate"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2018-25032"
      },
      {
        "trust": 0.3,
        "url": "https://access.redhat.com/security/cve/cve-2020-19131"
      },
      {
        "trust": 0.3,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/articles/11258"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-42739"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/team/key/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/8.6_release_notes/"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41864"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-0536"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-21803"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-24785"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-23806"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-29810"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2022-1154"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2020-35492"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492"
      },
      {
        "trust": 0.2,
        "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html"
      },
      {
        "trust": 0.2,
        "url": "https://access.redhat.com/security/cve/cve-2021-3807"
      },
      {
        "trust": 0.2,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3737"
      },
      {
        "trust": 0.1,
        "url": "https://cwe.mitre.org/data/definitions/327.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:6983"
      },
      {
        "trust": 0.1,
        "url": "https://ubuntu.com/security/notices/usn-5299-1"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1988"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4305"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-1708"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3696"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-38185"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28733"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0492"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-29526"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28736"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3697"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28734"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28737"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-25219"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3695"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-28735"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5392"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43389"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:1975"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43976"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:4814"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39293"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-39293"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3807"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26691"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:5483"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-23852"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43858"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-27191"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/release_notes/"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43565"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-43816"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-41190"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-24450"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:4956"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-0778"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2021-3918"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30322"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21626"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21626"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30322"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21628"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-2588"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhsa-2022:7874"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-39399"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-30321"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2588"
      },
      {
        "trust": 0.1,
        "url": "https://quay.io/repository/openshift-release-dev/ocp-release?tab=tags"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45486"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21123"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-26945"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/release_notes/ocp-4-8-release-notes.html"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21618"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21624"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21618"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/errata/rhba-2022:7873"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21125"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21628"
      },
      {
        "trust": 0.1,
        "url": "https://issues.jboss.org/):"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21123"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-21619"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21166"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-30323"
      },
      {
        "trust": 0.1,
        "url": "https://docs.openshift.com/container-platform/4.8/updating/updating-cluster-cli.html"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-26945"
      },
      {
        "trust": 0.1,
        "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21125"
      },
      {
        "trust": 0.1,
        "url": "https://access.redhat.com/security/cve/cve-2022-41974"
      }
    ],
    "sources": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "169941"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "sources": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "db": "PACKETSTORM",
        "id": "169941"
      },
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "sources_release_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2021-12-25T00:00:00",
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "date": "2021-12-25T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "date": "2023-01-18T00:00:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "date": "2022-05-11T16:54:36",
        "db": "PACKETSTORM",
        "id": "167097"
      },
      {
        "date": "2022-06-29T20:27:02",
        "db": "PACKETSTORM",
        "id": "167622"
      },
      {
        "date": "2022-05-11T16:37:26",
        "db": "PACKETSTORM",
        "id": "167072"
      },
      {
        "date": "2022-05-31T17:24:53",
        "db": "PACKETSTORM",
        "id": "167330"
      },
      {
        "date": "2022-07-01T15:04:32",
        "db": "PACKETSTORM",
        "id": "167679"
      },
      {
        "date": "2022-06-09T16:11:52",
        "db": "PACKETSTORM",
        "id": "167459"
      },
      {
        "date": "2022-11-18T14:28:39",
        "db": "PACKETSTORM",
        "id": "169941"
      },
      {
        "date": "2021-12-25T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "date": "2021-12-25T02:15:06.667000",
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "sources_update_date": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/sources_update_date#",
      "data": {
        "@container": "@list"
      }
    },
    "data": [
      {
        "date": "2023-02-24T00:00:00",
        "db": "VULHUB",
        "id": "VHN-409116"
      },
      {
        "date": "2023-02-24T00:00:00",
        "db": "VULMON",
        "id": "CVE-2021-45485"
      },
      {
        "date": "2023-01-18T05:28:00",
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      },
      {
        "date": "2023-01-04T00:00:00",
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      },
      {
        "date": "2023-02-24T15:07:31.653000",
        "db": "NVD",
        "id": "CVE-2021-45485"
      }
    ]
  },
  "threat_type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/threat_type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "remote",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      }
    ],
    "trust": 0.6
  },
  "title": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/title#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "Linux\u00a0Kernel\u00a0 Vulnerability in using cryptographic algorithms in",
    "sources": [
      {
        "db": "JVNDB",
        "id": "JVNDB-2021-017434"
      }
    ],
    "trust": 0.8
  },
  "type": {
    "@context": {
      "@vocab": "https://www.variotdbs.pl/ref/type#",
      "sources": {
        "@container": "@list",
        "@context": {
          "@vocab": "https://www.variotdbs.pl/ref/sources#"
        }
      }
    },
    "data": "encryption problem",
    "sources": [
      {
        "db": "CNNVD",
        "id": "CNNVD-202112-2265"
      }
    ],
    "trust": 0.6
  }
}
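Verification note. The references in this record identify kernel 5.13.3 (the kernel.org ChangeLog entry and upstream commit 62f20e068ccc) as the first mainline release containing the fix. What follows is a minimal Python sketch of a local version check, not anything taken from the advisories themselves; the 5.13.3 threshold is the only fact it borrows from this record. It is a heuristic only: enterprise kernels such as the Red Hat builds covered by RHSA-2022:1988 backport the fix without changing the upstream version string, so the vendor errata status remains authoritative.

#!/usr/bin/env python3
# Minimal sketch (not from the advisory): compare the running kernel
# version against 5.13.3, the first upstream release that ships the
# fix for CVE-2021-45485. Heuristic only: vendor kernels backport
# fixes without bumping the upstream version number.
import platform
import re

FIXED = (5, 13, 3)

def kernel_version(release=None):
    """Extract (major, minor, patch) from a kernel release string."""
    release = release or platform.release()  # e.g. "5.13.3-arch1-1"
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", release)
    if m is None:
        raise ValueError("unrecognized kernel release: %r" % release)
    return tuple(int(g or 0) for g in m.groups())

if __name__ == "__main__":
    ver = kernel_version()
    verdict = "includes" if ver >= FIXED else "predates"
    print("kernel %s %s the upstream fix (5.13.3)"
          % (".".join(map(str, ver)), verdict))

Run on the affected host itself; a result of "predates" means the kernel version is older than 5.13.3 and the fix status should be confirmed against the distribution's errata.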

