var-202201-0496
Vulnerability from variot
A flaw was found in the Linux kernel's control groups and namespaces subsystem in the way file handles to cgroup files are permission-checked: an unprivileged user can retain write access to cgroup files of a less privileged process that is controlled by cgroups and has a more privileged parent process. The flaw affects both the cgroup1 and cgroup2 versions of control groups. A local user could use it to crash the system or escalate their privileges. The Linux kernel therefore contains an improper permission-check vulnerability: information may be obtained or tampered with, and service operation may be interrupted (DoS). Attackers can exploit this vulnerability by writing to cgroup file descriptors to bypass Linux kernel restrictions and elevate their privileges.

==========================================================================
Ubuntu Security Notice USN-5368-1
April 06, 2022
linux-azure-5.13, linux-oracle-5.13 vulnerabilities
A security issue affects these releases of Ubuntu and its derivatives:
- Ubuntu 20.04 LTS
Summary:
Several security issues were fixed in the Linux kernel.
Software Description:
- linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems
- linux-oracle-5.13: Linux kernel for Oracle Cloud systems
Details:
It was discovered that the BPF verifier in the Linux kernel did not properly restrict pointer types in certain situations. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-1055)
Yiqi Sun and Kevin Wang discovered that the cgroups implementation in the Linux kernel did not properly restrict access to the cgroups v1 release_agent feature. (CVE-2022-0492)
Jürgen Groß discovered that the Xen subsystem within the Linux kernel did not adequately limit the number of events driver domains (unprivileged PV backends) could send to other guest VMs. An attacker in a driver domain could use this to cause a denial of service in other guest VMs. (CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)
Jürgen Groß discovered that the Xen network backend driver in the Linux kernel did not adequately limit the amount of queued packets when a guest did not process them. An attacker in a guest VM can use this to cause a denial of service (excessive kernel memory consumption) in the network backend domain. (CVE-2021-28714, CVE-2021-28715)
Szymon Heidrich discovered that the USB Gadget subsystem in the Linux kernel did not properly restrict the size of control requests for certain gadget types, leading to possible out of bounds reads or writes. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-39698)
It was discovered that the simulated networking device driver for the Linux kernel did not properly initialize memory in certain situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2021-4135)
Eric Biederman discovered that the cgroup process migration implementation in the Linux kernel did not perform permission checks correctly in some situations. (CVE-2021-4197)
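The affected-version data at the end of this entry pairs introduction and fix points for CVE-2021-4197 on each stable kernel branch (fixed in 4.14.276, 4.19.238, 5.4.189, 5.10.111 and 5.15.14). A minimal sketch of checking a version string against those ranges, assuming a plain "major.minor.patch" string; the helper names are illustrative, not part of any advisory tooling:

```python
# Affected ranges for CVE-2021-4197 as listed in this entry:
# [introduced, fixed) per stable branch.
AFFECTED_RANGES = [
    ((4, 2), (4, 14, 276)),
    ((4, 15), (4, 19, 238)),
    ((4, 20), (5, 4, 189)),
    ((5, 5), (5, 10, 111)),
    ((5, 11), (5, 15, 14)),
]

def parse(version: str) -> tuple:
    # "5.10.110" -> (5, 10, 110); drop any local suffix after "-"
    return tuple(int(p) for p in version.split("-")[0].split("."))

def affected(version: str) -> bool:
    v = parse(version)
    # Tuple comparison gives the usual numeric version ordering.
    return any(lo <= v < hi for lo, hi in AFFECTED_RANGES)
```

Distribution kernels backport fixes independently, so this kind of upstream range check is only a first approximation; the package versions in the advisories below are authoritative for Ubuntu and RHEL.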
Brendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device driver in the Linux kernel did not properly validate meta-data coming from the device. A local attacker who can control an emulated device can use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2021-43975)
It was discovered that the ARM Trusted Execution Environment (TEE) subsystem in the Linux kernel contained a race condition leading to a use- after-free vulnerability. A local attacker could use this to cause a denial of service or possibly execute arbitrary code. (CVE-2021-44733)
It was discovered that the Phone Network protocol (PhoNet) implementation in the Linux kernel did not properly perform reference counting in some error conditions. A local attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2021-45095)
It was discovered that the eBPF verifier in the Linux kernel did not properly perform bounds checking on mov32 operations. A local attacker could use this to expose sensitive information (kernel pointer addresses). (CVE-2021-45402)
It was discovered that the Reliable Datagram Sockets (RDS) protocol implementation in the Linux kernel did not properly deallocate memory in some error conditions. A local attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2021-45480)
It was discovered that the BPF subsystem in the Linux kernel did not properly track pointer types on atomic fetch operations in some situations. A local attacker could use this to expose sensitive information (kernel pointer addresses). (CVE-2022-0264)
It was discovered that the TIPC Protocol implementation in the Linux kernel did not properly initialize memory in some situations. A local attacker could use this to expose sensitive information (kernel memory). (CVE-2022-0382)
Samuel Page discovered that the Transparent Inter-Process Communication (TIPC) protocol implementation in the Linux kernel contained a stack-based buffer overflow. A remote attacker could use this to cause a denial of service (system crash) for systems that have a TIPC bearer configured. (CVE-2022-0435)
It was discovered that the KVM implementation for s390 systems in the Linux kernel did not properly prevent memory operations on PVM guests that were in non-protected mode. A local attacker could use this to obtain unauthorized memory write access. (CVE-2022-0516)
It was discovered that the ICMPv6 implementation in the Linux kernel did not properly deallocate memory in certain situations. A remote attacker could possibly use this to cause a denial of service (memory exhaustion). (CVE-2022-0742)
It was discovered that the IPsec implementation in the Linux kernel did not properly allocate enough memory when performing ESP transformations, leading to a heap-based buffer overflow. A local attacker could use this to cause a denial of service (system crash) or possibly execute arbitrary code. (CVE-2022-27666)
Update instructions:
The problem can be corrected by updating your system to the following package versions:
Ubuntu 20.04 LTS:
  linux-image-5.13.0-1021-azure   5.13.0-1021.24~20.04.1
  linux-image-5.13.0-1025-oracle  5.13.0-1025.30~20.04.1
  linux-image-azure               5.13.0.1021.24~20.04.10
  linux-image-oracle              5.13.0.1025.30~20.04.1
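The "~" in these package versions follows dpkg version ordering, where a tilde sorts before everything, including the end of the string, so 5.13.0-1021.24~20.04.1 orders before a plain 5.13.0-1021.24. A sketch of that rule (dpkg's core verrevcmp loop only; real dpkg also splits epoch, upstream version and Debian revision before comparing):

```python
def _order(c: str) -> int:
    # dpkg character weights: '~' sorts before everything (even end of
    # string / digits, which weigh 0), letters before other characters.
    if c == "~":
        return -1
    if c.isalpha():
        return ord(c)
    return ord(c) + 256

def verrevcmp(a: str, b: str) -> int:
    i = j = 0
    while i < len(a) or j < len(b):
        # Compare the non-digit prefixes character by character;
        # a digit or end-of-string weighs 0 here.
        while (i < len(a) and not a[i].isdigit()) or (j < len(b) and not b[j].isdigit()):
            ac = _order(a[i]) if i < len(a) and not a[i].isdigit() else 0
            bc = _order(b[j]) if j < len(b) and not b[j].isdigit() else 0
            if ac != bc:
                return ac - bc
            i += 1
            j += 1
        # Compare the numeric runs, ignoring leading zeros: a longer
        # run of digits is a larger number, equal lengths compare
        # lexicographically.
        while i < len(a) and a[i] == "0":
            i += 1
        while j < len(b) and b[j] == "0":
            j += 1
        sa, sb = i, j
        while i < len(a) and a[i].isdigit():
            i += 1
        while j < len(b) and b[j].isdigit():
            j += 1
        na, nb = a[sa:i], b[sb:j]
        if len(na) != len(nb):
            return -1 if len(na) < len(nb) else 1
        if na != nb:
            return -1 if na < nb else 1
    return 0
```

The tilde rule is what lets Ubuntu publish backport builds like ...1021.24~20.04.1 that upgrade cleanly to the eventual ...1021.24 release.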
After a standard system update you need to reboot your computer to make all the necessary changes.
ATTENTION: Due to an unavoidable ABI change the kernel updates have been given a new version number, which requires you to recompile and reinstall all third party kernel modules you might have installed. Unless you manually uninstalled the standard kernel metapackages (e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual, linux-powerpc), a standard system upgrade will automatically perform this as well.
References: https://ubuntu.com/security/notices/USN-5368-1 CVE-2021-28711, CVE-2021-28712, CVE-2021-28713, CVE-2021-28714, CVE-2021-28715, CVE-2021-39685, CVE-2021-39698, CVE-2021-4135, CVE-2021-4197, CVE-2021-43975, CVE-2021-44733, CVE-2021-45095, CVE-2021-45402, CVE-2021-45480, CVE-2022-0264, CVE-2022-0382, CVE-2022-0435, CVE-2022-0492, CVE-2022-0516, CVE-2022-0742, CVE-2022-1055, CVE-2022-23222, CVE-2022-27666
Package Information: https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1 https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1
. Description:
Red Hat Advanced Cluster Management for Kubernetes 2.4.5 images
Red Hat Advanced Cluster Management for Kubernetes provides the capabilities to address common challenges that administrators and site reliability engineers face as they work across a range of public and private cloud environments. Clusters and applications are all visible and managed from a single console—with security policy built in. See the following Release Notes documentation, which will be updated shortly for this release, for additional details about this release:
https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/
Security fixes:
- golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)
- nconf: Prototype pollution in memory store (CVE-2022-21803)
- golang: crypto/elliptic IsOnCurve returns true for invalid field elements (CVE-2022-23806)
- nats-server: misusing the "dynamically provisioned sandbox accounts" feature, an authenticated user can obtain the privileges of the System account (CVE-2022-24450)
- Moment.js: Path traversal in moment.locale (CVE-2022-24785)
- dset: Prototype Pollution in dset (CVE-2022-25645)
- golang: syscall: faccessat checks wrong group (CVE-2022-29526)
- go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users (CVE-2022-29810)
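Several of the fixes above address path-traversal bugs in dependencies (for example moment.locale, CVE-2022-24785), where attacker-controlled input escapes an intended base directory. As a generic illustration of the defensive pattern such fixes apply, not the actual Moment.js patch, a sketch in Python:

```python
import os

def safe_join(base: str, untrusted: str) -> str:
    """Join an untrusted relative path onto a base directory,
    rejecting any input that resolves outside the base (the class
    of bug behind path traversal issues)."""
    base = os.path.abspath(base)
    candidate = os.path.abspath(os.path.join(base, untrusted))
    # commonpath collapses "..", so "../../etc/passwd" fails here.
    if os.path.commonpath([base, candidate]) != base:
        raise ValueError(f"path escapes base directory: {untrusted!r}")
    return candidate
```

Resolving to an absolute path before checking is the key step; a naive prefix check on the unresolved string is exactly what traversal payloads defeat.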
Bug fixes:
- Trying to create a new cluster on vSphere and no feedback, stuck in "creating" (BZ# 1937078)
- Wrong message is displayed when GRC fails to connect to an Ansible Tower (BZ# 2051752)
- multicluster_operators_hub_subscription issues due to /tmp usage (BZ# 2052702)
- Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field (BZ# 2054954)
- Changing the multiclusterhub name other than the default name keeps the version in the web console loading (BZ# 2059822)
- search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade (BZ# 2065318)
- Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ# 2073562)
- Deprovisioned clusters not filtered out by discovery controller (BZ# 2075594)
- When deleting a secret for a Helm application, duplicate errors show up in topology (BZ# 2075675)
- Changing existing placement rules does not change YAML file Regression (BZ# 2075724)
- Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)
- Failed to delete the requested resource [404] error appears after subscription is deleted and its placement rule is used in the second subscription (BZ# 2080713)
- Typo in the logs when Deployable is updated in the subscription namespace (BZ# 2080960)
- After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters (BZ# 2080716)
- RHACM 2.4.5 images (BZ# 2081438)
- Performance issue to get secret in claim-controller (BZ# 2081908)
- Failed to provision openshift 4.10 on bare metal (BZ# 2094109)
- Bugs fixed (https://bugzilla.redhat.com/):
1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in "creating"
2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic
2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower
2052573 - CVE-2022-24450 nats-server: misusing the "dynamically provisioned sandbox accounts" feature authenticated user can obtain the privileges of the System account
2052702 - multicluster_operators_hub_subscription issues due to /tmp usage
2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements
2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field
2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading
2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade
2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale
2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM
2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store
2075594 - Deprovisioned clusters not filtered out by discovery controller
2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology
2075724 - Changing existing placement rules does not change YAML file
2079906 - Editing Helm Argo Applications does not Prune Old Resources
2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local users
2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and its placement rule is used in the second subscription [Upgrade]
2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters
2080847 - CVE-2022-25645 dset: Prototype Pollution in dset
2080960 - Typo in the logs when Deployable is updated in the subscription namespace
2081438 - RHACM 2.4.5 images
2081908 - Performance issue to get secret in claim-controller
2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group
2094109 - Failed to provision openshift 4.10 on bare metal
- -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256
===================================================================== Red Hat Security Advisory
Synopsis: Important: kernel security and bug fix update Advisory ID: RHSA-2022:5626-01 Product: Red Hat Enterprise Linux Advisory URL: https://access.redhat.com/errata/RHSA-2022:5626 Issue date: 2022-07-19 CVE Names: CVE-2020-29368 CVE-2021-4197 CVE-2021-4203 CVE-2022-1012 CVE-2022-1729 CVE-2022-32250 =====================================================================
- Summary:
An update for kernel is now available for Red Hat Enterprise Linux 8.4 Extended Update Support.
Red Hat Product Security has rated this update as having a security impact of Important. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.
- Relevant releases/architectures:
Red Hat CodeReady Linux Builder EUS (v. 8.4) - aarch64, ppc64le, x86_64 Red Hat Enterprise Linux BaseOS EUS (v.8.4) - aarch64, noarch, ppc64le, s390x, x86_64
Security Fix(es):
- kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak (CVE-2022-1012)
- kernel: race condition in perf_event_open leads to privilege escalation (CVE-2022-1729)
- kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root (CVE-2022-32250)
- kernel: cgroup: Use open-time creds and namespace for migration perm checks (CVE-2021-4197)
- kernel: race condition in sk_peer_pid and sk_peer_cred accesses (CVE-2021-4203)
- kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check (CVE-2020-29368)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
Bug Fix(es):
- Failed to reboot after crash trigger (BZ#2060747)
- conntrack entries linger around after test (BZ#2066357)
- Enable nested virtualization (BZ#2079070)
- slub corruption during LPM of hnv interface (BZ#2081251)
- sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35 (BZ#2082091)
- Backport request of "genirq: use rcu in kstat_irqs_usr()" (BZ#2083309)
- ethtool -L may cause system to hang (BZ#2083323)
- For isolated CPUs (with nohz_full enabled for isolated CPUs) CPU utilization statistics are not getting reflected continuously (BZ#2084139)
- Affinity broken due to vector space exhaustion (BZ#2084647)
- kernel memory leak while freeing nested actions (BZ#2086597)
- sync rhel-8.6 with upstream 5.13 through 5.16 fixes and improvements (BZ#2088037)
- Kernel panic possibly when cleaning namespace on pod deletion (BZ#2089539)
- Softirq hrtimers are being placed on the per-CPU softirq clocks on isolcpus (BZ#2090485)
- fix missed wake-ups in rq_qos_throttle try two (BZ#2092076)
- NFS4 client experiencing IO outages while sending duplicate SYNs and erroneous RSTs during connection reestablishment (BZ#2094334)
- using __this_cpu_read() in preemptible [00000000] code: kworker/u66:1/937154 (BZ#2095775)
- Need some changes in RHEL8.x kernels (BZ#2096932)
- Solution:
For details on how to apply this update, which includes the changes described in this advisory, refer to:
https://access.redhat.com/articles/11258
The system must be rebooted for this update to take effect.
- Bugs fixed (https://bugzilla.redhat.com/):
1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check
2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks
2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses
2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak
2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation
2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root
- Package List:
Red Hat Enterprise Linux BaseOS EUS (v.8.4):
Source: kernel-4.18.0-305.57.1.el8_4.src.rpm
aarch64: bpftool-4.18.0-305.57.1.el8_4.aarch64.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-core-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-devel-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-headers-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-modules-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.aarch64.rpm perf-4.18.0-305.57.1.el8_4.aarch64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm
noarch: kernel-abi-stablelists-4.18.0-305.57.1.el8_4.noarch.rpm kernel-doc-4.18.0-305.57.1.el8_4.noarch.rpm
ppc64le: bpftool-4.18.0-305.57.1.el8_4.ppc64le.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-core-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm
s390x: bpftool-4.18.0-305.57.1.el8_4.s390x.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-debuginfo-common-s390x-4.18.0-305.57.1.el8_4.s390x.rpm kernel-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-headers-4.18.0-305.57.1.el8_4.s390x.rpm kernel-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm kernel-tools-4.18.0-305.57.1.el8_4.s390x.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-core-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-devel-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-modules-4.18.0-305.57.1.el8_4.s390x.rpm kernel-zfcpdump-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm perf-4.18.0-305.57.1.el8_4.s390x.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm python3-perf-4.18.0-305.57.1.el8_4.s390x.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm
x86_64: bpftool-4.18.0-305.57.1.el8_4.x86_64.rpm bpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-core-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-cross-headers-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-core-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-devel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-modules-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-devel-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-headers-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-modules-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-libs-4.18.0-305.57.1.el8_4.x86_64.rpm perf-4.18.0-305.57.1.el8_4.x86_64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm
Red Hat CodeReady Linux Builder EUS (v. 8.4):
aarch64: bpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.aarch64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm
ppc64le: bpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm
x86_64: bpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm kernel-tools-libs-devel-4.18.0-305.57.1.el8_4.x86_64.rpm perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm python3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm
These packages are GPG signed by Red Hat for security. Our key and details on how to verify the signature are available from https://access.redhat.com/security/team/key/
- Contact:
The Red Hat security contact is secalert@redhat.com. More contact details at https://access.redhat.com/security/team/contact/
Copyright 2022 Red Hat, Inc. -----BEGIN PGP SIGNATURE----- Version: GnuPG v1
iQIVAwUBYuFkSdzjgjWX9erEAQhDCxAAknsy8K3eg1J603gMndUGWfI/Fs5VzIaH lxGavTw8H57lXRWbQYqJoRKk42uHAH2iCicovyvowJ5SdfnChtAVbG1A1wjJLmJJ 0YDKoeMn3s1jjThivm5rWGQVdImqLw+CxVvb3Pywv6ZswTI5r4ZB4FEXW8GIR1w2 1FeHTcwUgNLzeBLdVem1T50lWERG0j0ZGUmv9mu4QMDeWXoSoPcHKWnsmLgDvQif dVky3UsFoCJ783WJOIctmY97kOffqIDvZdbPwajAyTByspumtcwt6N7wMU6VfI+u B6bRGQgLbElY6IniLUsV7MG8GbbffZvPFNN/n6LdnnFgEt1eDlo6LkZCyPaMbEfx 2dMxJtcAiXmydMs5QXvNJ3y2UR2fp/iHF8euAnSN3eKTLAxDQwo3c4KvNUKAfFcF OAjbyLTilLhiPHRARG4aEWCEUSmfzO3rulNhRcIEWtNIira3/QMFG9qUjNAMvzU1 M4tMSPkH35gx49p2a6arZceUGDXiRwvrP142GzpAgRWt/GrydjAsRiG4pJM2H5TW nB5q7OuwEvch8+o8gJril5uOpm6eI1lylv9wTXbwjpzQqL5k2JcgByRWx8wLqYXy wXsBm+JZL9ztSadqoVsFWSqC0yeRkuF185F4gI7+7azjpeQhHtJEix3bgqRhIzK4 07JERnC1IRg= =y1sh -----END PGP SIGNATURE----- -- RHSA-announce mailing list RHSA-announce@redhat.com https://listman.redhat.com/mailman/listinfo/rhsa-announce . Summary:
The Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:
The Migration Toolkit for Containers (MTC) enables you to migrate Kubernetes resources, persistent volume data, and internal container images between OpenShift Container Platform clusters, using the MTC web console or the Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):
2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes
2038898 - [UI] "Update Repository" option not getting disabled after adding the Replication Repository details to the MTC web console
2040693 - "Replication repository" wizard has no validation for name length
2040695 - [MTC UI] "Add Cluster" wizard gets stuck when the cluster name length is more than 63 characters
2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor
2048537 - Exposed route host to image registry: connecting successfully to invalid registry "xyz.com"
2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak
2055658 - [MTC UI] Cancel button on "Migrations" page does not disappear when migration gets Failed/Succeeded with warnings
2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace
2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the "Last State" field
2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade
2061335 - [MTC UI] "Update cluster" button is not getting disabled
2062266 - MTC UI does not display logs properly [OADP-BL]
2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster's service account secret from backend
2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x
2076593 - Velero pod log missing from UI drop down
2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]
2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan
2079252 - [MTC] Rsync options logs not visible in log-reader pod
2082221 - Don't allow Storage class conversion migration if source cluster has only one storage class defined [UI]
2082225 - non-numeric user when launching stage pods [OADP-BL]
2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments
2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods
2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels
2089411 - [MTC] Log reader pod is missing velero and restic pod logs [OADP-BL]
2089859 - [Crane] DPA CR is missing the required flag - migration fails at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts
2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]
2096939 - Fix legacy operator.yml inconsistencies and errors
2100486 - [MTC UI] Target storage class field is not getting respected when clusters don't have replication repo configured

Description:
Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments.
This advisory contains the container images for Red Hat OpenShift Container Platform 4.9.45. See the following advisory for the RPM packages for this release:
https://access.redhat.com/errata/RHBA-2022:5878
Space precludes documenting all of the container images in this advisory. See the following Release Notes documentation, which will be updated shortly for this release, for details about these changes:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Security Fix(es):
- openshift: oauth-serving-cert configmap contains cluster certificate private key (CVE-2022-2403)
For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.
You may download the oc tool and use it to inspect release image metadata as follows:
(For x86_64 architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-x86_64
The image digest is sha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec
(For s390x architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-s390x
The image digest is sha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984
(For ppc64le architecture)
$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le
The image digest is sha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e
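The digests above are content addresses: the sha256 of the release image's manifest bytes, so a mismatch means the image is not the one published in this advisory. A minimal illustration of how such a digest string is formed (over arbitrary bytes here, not a real manifest):

```python
import hashlib

def image_digest(manifest_bytes: bytes) -> str:
    # OCI/Docker-style digest: algorithm name, a colon, then the
    # lowercase hex sha256 of the manifest bytes.
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
```

Tools such as oc and podman perform this verification automatically when an image is pulled by digest rather than by tag.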
All OpenShift Container Platform 4.9 users are advised to upgrade to these updated packages and images when they are available in the appropriate release channel. To check for available updates, use the OpenShift Console or the CLI oc command. Instructions for upgrading a cluster are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Solution:
For OpenShift Container Platform 4.9 see the following documentation, which will be updated shortly for this release, for important instructions on how to upgrade your cluster and fully apply this asynchronous errata update:
https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html
Details on how to access this content are available at https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html
- Bugs fixed (https://bugzilla.redhat.com/):
2009024 - Unable to complete cluster destruction, some ports are left over
2055494 - console operator should report Upgradeable False when SAN-less certs are used
2083554 - post 1.23 rebase: regression in service-load balancer reliability
2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot
2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems
2091806 - Cluster upgrade stuck due to "resource deletions in progress"
2095320 - [4.9] Bootimage bump tracker
2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption
2100786 - [OCP 4.9] Ironic cannot match "wwn" rootDeviceHint for a multipath device
2101664 - disabling ipv6 router advertisements using "all" does not disable it on secondary interfaces
2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key
2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed
2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: "invalid memory address or nil pointer dereference"
2105453 - Node reboot causes duplicate persistent volumes
2105654 - egressIP panics with nil pointer dereference
2105663 - APIRequestCount does not identify some APIs removed in 4.9
2106655 - Kubelet slowly leaking memory and pods eventually unable to start
2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected
2108619 - ClusterVersion history pruner does not always retain initial completed update entry
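The affected-version data recorded in this entry can be checked mechanically. Below is a minimal sketch, not an official tool: the ranges are copied from the NVD configuration data for CVE-2021-4197 quoted in this entry (start inclusive, end exclusive, per the versionStartIncluding/versionEndExcluding fields), and `is_vulnerable` is a hypothetical helper name introduced here for illustration.

```python
# Hypothetical helper: test whether a mainline kernel version string
# falls inside one of the vulnerable ranges listed for CVE-2021-4197.
# Ranges are [start inclusive, end exclusive), as in the NVD data below.

VULNERABLE_RANGES = [
    ("4.2", "4.14.276"),
    ("4.15", "4.19.238"),
    ("4.20", "5.4.189"),
    ("5.5", "5.10.111"),
    ("5.11", "5.15.14"),
]

def _ver(s: str) -> tuple:
    # "5.10.111" -> (5, 10, 111); tuple comparison orders versions correctly
    return tuple(int(p) for p in s.split("."))

def is_vulnerable(version: str) -> bool:
    v = _ver(version)
    return any(_ver(lo) <= v < _ver(hi) for lo, hi in VULNERABLE_RANGES)

print(is_vulnerable("5.10.110"))  # True: last unfixed 5.10 stable release
print(is_vulnerable("5.15.14"))   # False: first fixed release in the 5.15 range
```

Note this only reflects the mainline/stable ranges in this entry; distribution kernels (e.g. the Ubuntu and Red Hat builds in the advisories above) backport fixes and must be checked against their own package versions instead.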
Show details on source website{ "@context": { "@vocab": "https://www.variotdbs.pl/ref/VARIoTentry#", "affected_products": { "@id": "https://www.variotdbs.pl/ref/affected_products" }, "configurations": { "@id": "https://www.variotdbs.pl/ref/configurations" }, "credits": { "@id": "https://www.variotdbs.pl/ref/credits" }, "cvss": { "@id": "https://www.variotdbs.pl/ref/cvss/" }, "description": { "@id": "https://www.variotdbs.pl/ref/description/" }, "exploit_availability": { "@id": "https://www.variotdbs.pl/ref/exploit_availability/" }, "external_ids": { "@id": "https://www.variotdbs.pl/ref/external_ids/" }, "iot": { "@id": "https://www.variotdbs.pl/ref/iot/" }, "iot_taxonomy": { "@id": "https://www.variotdbs.pl/ref/iot_taxonomy/" }, "patch": { "@id": "https://www.variotdbs.pl/ref/patch/" }, "problemtype_data": { "@id": "https://www.variotdbs.pl/ref/problemtype_data/" }, "references": { "@id": "https://www.variotdbs.pl/ref/references/" }, "sources": { "@id": "https://www.variotdbs.pl/ref/sources/" }, "sources_release_date": { "@id": "https://www.variotdbs.pl/ref/sources_release_date/" }, "sources_update_date": { "@id": "https://www.variotdbs.pl/ref/sources_update_date/" }, "threat_type": { "@id": "https://www.variotdbs.pl/ref/threat_type/" }, "title": { "@id": "https://www.variotdbs.pl/ref/title/" }, "type": { "@id": "https://www.variotdbs.pl/ref/type/" } }, "@id": "https://www.variotdbs.pl/vuln/VAR-202201-0496", "affected_products": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/affected_products#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "model": "h700s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.15.14" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": 
"4.14.276" }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "5.5" }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "5.11" }, { "model": "linux", "scope": "eq", "trust": 1.0, "vendor": "debian", "version": "10.0" }, { "model": "h300s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.15" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "4.19.238" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.4.189" }, { "model": "kernel", "scope": "lt", "trust": 1.0, "vendor": "linux", "version": "5.10.111" }, { "model": "brocade fabric operating system", "scope": "eq", "trust": 1.0, "vendor": "broadcom", "version": null }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.2" }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.2.0" }, { "model": "h410s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.3" }, { "model": "h500s", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": "gte", "trust": 1.0, "vendor": "linux", "version": "4.20" }, { "model": "h410c", "scope": "eq", "trust": 1.0, "vendor": "netapp", "version": null }, { "model": "communications cloud native core binding support function", "scope": "eq", "trust": 1.0, "vendor": "oracle", "version": "22.1.1" }, { "model": "h300s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h410s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h700s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null 
}, { "model": "h410c", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "h500s", "scope": null, "trust": 0.8, "vendor": "netapp", "version": null }, { "model": "kernel", "scope": null, "trust": 0.8, "vendor": "linux", "version": null }, { "model": "oracle communications cloud native core binding support function", "scope": null, "trust": 0.8, "vendor": "\u30aa\u30e9\u30af\u30eb", "version": null }, { "model": "brocade fabric os", "scope": null, "trust": 0.8, "vendor": "broadcom", "version": null } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "configurations": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/configurations#", "children": { "@container": "@list" }, "cpe_match": { "@container": "@list" }, "data": { "@container": "@list" }, "nodes": { "@container": "@list" } }, "data": [ { "CVE_data_version": "4.0", "nodes": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.4.189", "versionStartIncluding": "4.20", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.19.238", "versionStartIncluding": "4.15", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "4.14.276", "versionStartIncluding": "4.2", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.10.111", "versionStartIncluding": "5.5", "vulnerable": true }, { "cpe23Uri": "cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*", "cpe_name": [], "versionEndExcluding": "5.15.14", "versionStartIncluding": "5.11", "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:debian:debian_linux:10.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": 
"cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.3:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.1.1:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true }, { "cpe23Uri": "cpe:2.3:a:oracle:communications_cloud_native_core_binding_support_function:22.2.0:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:broadcom:brocade_fabric_operating_system_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h300s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h300s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h500s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h500s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h700s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h700s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410s_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": 
"cpe:2.3:h:netapp:h410s:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" }, { "children": [ { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:o:netapp:h410c_firmware:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": true } ], "operator": "OR" }, { "children": [], "cpe_match": [ { "cpe23Uri": "cpe:2.3:h:netapp:h410c:-:*:*:*:*:*:*:*", "cpe_name": [], "vulnerable": false } ], "operator": "OR" } ], "cpe_match": [], "operator": "AND" } ] } ], "sources": [ { "db": "NVD", "id": "CVE-2021-4197" } ] }, "credits": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/credits#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Red Hat", "sources": [ { "db": "PACKETSTORM", "id": "167602" }, { "db": "PACKETSTORM", "id": "167852" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "168019" } ], "trust": 0.5 }, "cve": "CVE-2021-4197", "cvss": { "@context": { "cvssV2": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV2#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV2" }, "cvssV3": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/cvss/cvssV3#" }, "@id": "https://www.variotdbs.pl/ref/cvss/cvssV3/" }, "severity": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/cvss/severity#" }, "@id": "https://www.variotdbs.pl/ref/cvss/severity" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" }, "@id": "https://www.variotdbs.pl/ref/sources" } }, "data": [ { "cvssV2": [ { "acInsufInfo": false, "accessComplexity": "LOW", "accessVector": "LOCAL", "authentication": "NONE", "author": "NVD", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "impactScore": 10.0, "integrityImpact": 
"COMPLETE", "obtainAllPrivilege": false, "obtainOtherPrivilege": false, "obtainUserPrivilege": false, "severity": "HIGH", "trust": 1.0, "userInteractionRequired": false, "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "acInsufInfo": null, "accessComplexity": "Low", "accessVector": "Local", "authentication": "None", "author": "NVD", "availabilityImpact": "Complete", "baseScore": 7.2, "confidentialityImpact": "Complete", "exploitabilityScore": null, "id": "CVE-2021-4197", "impactScore": null, "integrityImpact": "Complete", "obtainAllPrivilege": null, "obtainOtherPrivilege": null, "obtainUserPrivilege": null, "severity": "High", "trust": 0.8, "userInteractionRequired": null, "vectorString": "AV:L/AC:L/Au:N/C:C/I:C/A:C", "version": "2.0" }, { "accessComplexity": "LOW", "accessVector": "LOCAL", "authentication": "NONE", "author": "VULHUB", "availabilityImpact": "COMPLETE", "baseScore": 7.2, "confidentialityImpact": "COMPLETE", "exploitabilityScore": 3.9, "id": "VHN-410862", "impactScore": 10.0, "integrityImpact": "COMPLETE", "severity": "HIGH", "trust": 0.1, "vectorString": "AV:L/AC:L/AU:N/C:C/I:C/A:C", "version": "2.0" } ], "cvssV3": [ { "attackComplexity": "LOW", "attackVector": "LOCAL", "author": "NVD", "availabilityImpact": "HIGH", "baseScore": 7.8, "baseSeverity": "HIGH", "confidentialityImpact": "HIGH", "exploitabilityScore": 1.8, "impactScore": 5.9, "integrityImpact": "HIGH", "privilegesRequired": "LOW", "scope": "UNCHANGED", "trust": 1.0, "userInteraction": "NONE", "vectorString": "CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.1" }, { "attackComplexity": "Low", "attackVector": "Local", "author": "NVD", "availabilityImpact": "High", "baseScore": 7.8, "baseSeverity": "High", "confidentialityImpact": "High", "exploitabilityScore": null, "id": "CVE-2021-4197", "impactScore": null, "integrityImpact": "High", "privilegesRequired": "Low", "scope": "Unchanged", "trust": 0.8, "userInteraction": "None", "vectorString": 
"CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "version": "3.0" } ], "severity": [ { "author": "NVD", "id": "CVE-2021-4197", "trust": 1.8, "value": "HIGH" }, { "author": "VULHUB", "id": "VHN-410862", "trust": 0.1, "value": "HIGH" } ] } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "description": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/description#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "An unprivileged write to the file handler flaw in the Linux kernel\u0027s control groups and namespaces subsystem was found in the way users have access to some less privileged process that are controlled by cgroups and have higher privileged parent process. It is actually both for cgroup2 and cgroup1 versions of control groups. A local user could use this flaw to crash the system or escalate their privileges on the system. Linux Kernel There is an authentication vulnerability in.Information is obtained, information is tampered with, and service operation is interrupted. (DoS) It may be in a state. Attackers can use this vulnerability to bypass the restrictions of the Linux kernel through Cgroup Fd Writing to elevate their privileges. ==========================================================================\nUbuntu Security Notice USN-5368-1\nApril 06, 2022\n\nlinux-azure-5.13, linux-oracle-5.13 vulnerabilities\n==========================================================================\n\nA security issue affects these releases of Ubuntu and its derivatives:\n\n- Ubuntu 20.04 LTS\n\nSummary:\n\nSeveral security issues were fixed in the Linux kernel. 
\n\nSoftware Description:\n- linux-azure-5.13: Linux kernel for Microsoft Azure cloud systems\n- linux-oracle-5.13: Linux kernel for Oracle Cloud systems\n\nDetails:\n\nIt was discovered that the BPF verifier in the Linux kernel did not\nproperly restrict pointer types in certain situations. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. A local attacker\ncould use this to cause a denial of service (system crash) or possibly\nexecute arbitrary code. (CVE-2022-1055)\n\nYiqi Sun and Kevin Wang discovered that the cgroups implementation in the\nLinux kernel did not properly restrict access to the cgroups v1\nrelease_agent feature. (CVE-2022-0492)\n\nJ\\xfcrgen Gro\\xdf discovered that the Xen subsystem within the Linux kernel did\nnot adequately limit the number of events driver domains (unprivileged PV\nbackends) could send to other guest VMs. An attacker in a driver domain\ncould use this to cause a denial of service in other guest VMs. \n(CVE-2021-28711, CVE-2021-28712, CVE-2021-28713)\n\nJ\\xfcrgen Gro\\xdf discovered that the Xen network backend driver in the Linux\nkernel did not adequately limit the amount of queued packets when a guest\ndid not process them. An attacker in a guest VM can use this to cause a\ndenial of service (excessive kernel memory consumption) in the network\nbackend domain. (CVE-2021-28714, CVE-2021-28715)\n\nSzymon Heidrich discovered that the USB Gadget subsystem in the Linux\nkernel did not properly restrict the size of control requests for certain\ngadget types, leading to possible out of bounds reads or writes. A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. A local\nattacker could use this to cause a denial of service (system crash) or\npossibly execute arbitrary code. 
(CVE-2021-39698)\n\nIt was discovered that the simulated networking device driver for the Linux\nkernel did not properly initialize memory in certain situations. A local\nattacker could use this to expose sensitive information (kernel memory). \n(CVE-2021-4135)\n\nEric Biederman discovered that the cgroup process migration implementation\nin the Linux kernel did not perform permission checks correctly in some\nsituations. (CVE-2021-4197)\n\nBrendan Dolan-Gavitt discovered that the aQuantia AQtion Ethernet device\ndriver in the Linux kernel did not properly validate meta-data coming from\nthe device. A local attacker who can control an emulated device can use\nthis to cause a denial of service (system crash) or possibly execute\narbitrary code. (CVE-2021-43975)\n\nIt was discovered that the ARM Trusted Execution Environment (TEE)\nsubsystem in the Linux kernel contained a race condition leading to a use-\nafter-free vulnerability. A local attacker could use this to cause a denial\nof service or possibly execute arbitrary code. (CVE-2021-44733)\n\nIt was discovered that the Phone Network protocol (PhoNet) implementation\nin the Linux kernel did not properly perform reference counting in some\nerror conditions. A local attacker could possibly use this to cause a\ndenial of service (memory exhaustion). (CVE-2021-45095)\n\nIt was discovered that the eBPF verifier in the Linux kernel did not\nproperly perform bounds checking on mov32 operations. A local attacker\ncould use this to expose sensitive information (kernel pointer addresses). \n(CVE-2021-45402)\n\nIt was discovered that the Reliable Datagram Sockets (RDS) protocol\nimplementation in the Linux kernel did not properly deallocate memory in\nsome error conditions. A local attacker could possibly use this to cause a\ndenial of service (memory exhaustion). 
(CVE-2021-45480)\n\nIt was discovered that the BPF subsystem in the Linux kernel did not\nproperly track pointer types on atomic fetch operations in some situations. \nA local attacker could use this to expose sensitive information (kernel\npointer addresses). (CVE-2022-0264)\n\nIt was discovered that the TIPC Protocol implementation in the Linux kernel\ndid not properly initialize memory in some situations. A local attacker\ncould use this to expose sensitive information (kernel memory). \n(CVE-2022-0382)\n\nSamuel Page discovered that the Transparent Inter-Process Communication\n(TIPC) protocol implementation in the Linux kernel contained a stack-based\nbuffer overflow. A remote attacker could use this to cause a denial of\nservice (system crash) for systems that have a TIPC bearer configured. \n(CVE-2022-0435)\n\nIt was discovered that the KVM implementation for s390 systems in the Linux\nkernel did not properly prevent memory operations on PVM guests that were\nin non-protected mode. A local attacker could use this to obtain\nunauthorized memory write access. (CVE-2022-0516)\n\nIt was discovered that the ICMPv6 implementation in the Linux kernel did\nnot properly deallocate memory in certain situations. A remote attacker\ncould possibly use this to cause a denial of service (memory exhaustion). \n(CVE-2022-0742)\n\nIt was discovered that the IPsec implementation in the Linux kernel did not\nproperly allocate enough memory when performing ESP transformations,\nleading to a heap-based buffer overflow. A local attacker could use this to\ncause a denial of service (system crash) or possibly execute arbitrary\ncode. 
(CVE-2022-27666)\n\nUpdate instructions:\n\nThe problem can be corrected by updating your system to the following\npackage versions:\n\nUbuntu 20.04 LTS:\n linux-image-5.13.0-1021-azure 5.13.0-1021.24~20.04.1\n linux-image-5.13.0-1025-oracle 5.13.0-1025.30~20.04.1\n linux-image-azure 5.13.0.1021.24~20.04.10\n linux-image-oracle 5.13.0.1025.30~20.04.1\n\nAfter a standard system update you need to reboot your computer to make\nall the necessary changes. \n\nATTENTION: Due to an unavoidable ABI change the kernel updates have\nbeen given a new version number, which requires you to recompile and\nreinstall all third party kernel modules you might have installed. \nUnless you manually uninstalled the standard kernel metapackages\n(e.g. linux-generic, linux-generic-lts-RELEASE, linux-virtual,\nlinux-powerpc), a standard system upgrade will automatically perform\nthis as well. \n\nReferences:\n https://ubuntu.com/security/notices/USN-5368-1\n CVE-2021-28711, CVE-2021-28712, CVE-2021-28713, CVE-2021-28714,\n CVE-2021-28715, CVE-2021-39685, CVE-2021-39698, CVE-2021-4135,\n CVE-2021-4197, CVE-2021-43975, CVE-2021-44733, CVE-2021-45095,\n CVE-2021-45402, CVE-2021-45480, CVE-2022-0264, CVE-2022-0382,\n CVE-2022-0435, CVE-2022-0492, CVE-2022-0516, CVE-2022-0742,\n CVE-2022-1055, CVE-2022-23222, CVE-2022-27666\n\nPackage Information:\n https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1\n https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1\n\n. Description:\n\nRed Hat Advanced Cluster Management for Kubernetes 2.4.5 images\n\nRed Hat Advanced Cluster Management for Kubernetes provides the\ncapabilities to address common challenges that administrators and site\nreliability engineers face as they work across a range of public and\nprivate cloud environments. Clusters and applications are all visible and\nmanaged from a single console\u2014with security policy built in. 
\nSee the following\nRelease Notes documentation, which will be updated shortly for this\nrelease, for additional details about this release:\n\nhttps://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/\n\nSecurity fixes:\n\n* golang.org/x/crypto: empty plaintext packet causes panic (CVE-2021-43565)\n\n* nconf: Prototype pollution in memory store (CVE-2022-21803)\n\n* golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n(CVE-2022-23806)\n\n* nats-server: misusing the \"dynamically provisioned sandbox accounts\"\nfeature authenticated user can obtain the privileges of the System account\n(CVE-2022-24450)\n\n* Moment.js: Path traversal in moment.locale (CVE-2022-24785)\n\n* dset: Prototype Pollution in dset (CVE-2022-25645)\n\n* golang: syscall: faccessat checks wrong group (CVE-2022-29526)\n\n* go-getter: writes SSH credentials into logfile, exposing sensitive\ncredentials to local uses (CVE-2022-29810)\n\nBug fixes:\n\n* Trying to create a new cluster on vSphere and no feedback, stuck in\n\"creating\" (BZ# 1937078)\n\n* Wrong message is displayed when GRC fails to connect to an Ansible Tower\n(BZ# 2051752)\n\n* multicluster_operators_hub_subscription issues due to /tmp usage (BZ#\n2052702)\n\n* Create Cluster, Worker Pool 2 zones do not load options that relate to\nthe selected Region field (BZ# 2054954)\n\n* Changing the multiclusterhub name other than the default name keeps the\nversion in the web console loading (BZ# 2059822)\n\n* search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n(BZ# 2065318)\n\n* Uninstall pod crashed when destroying Azure Gov cluster in ACM (BZ#\n2073562)\n\n* Deprovisioned clusters not filtered out by discovery controller (BZ#\n2075594)\n\n* When deleting a secret for a Helm application, duplicate errors show up\nin topology (BZ# 2075675)\n\n* Changing existing placement rules does not change YAML file Regression\n(BZ# 2075724)\n\n* 
Editing Helm Argo Applications does not Prune Old Resources (BZ# 2079906)\n\n* Failed to delete the requested resource [404] error appears after\nsubscription is deleted and its placement rule is used in the second\nsubscription (BZ# 2080713)\n\n* Typo in the logs when Deployable is updated in the subscription namespace\n(BZ# 2080960)\n\n* After Argo App Sets are created in an Upgraded Environment, the Clusters\ncolumn does not indicate the clusters (BZ# 2080716)\n\n* RHACM 2.4.5 images (BZ# 2081438)\n\n* Performance issue to get secret in claim-controller (BZ# 2081908)\n\n* Failed to provision openshift 4.10 on bare metal (BZ# 2094109)\n\n3. Bugs fixed (https://bugzilla.redhat.com/):\n\n1937078 - Trying to create a new cluster on vSphere and no feedback, stuck in \"creating\"\n2030787 - CVE-2021-43565 golang.org/x/crypto: empty plaintext packet causes panic\n2051752 - Wrong message is displayed when GRC fails to connect to an ansible tower\n2052573 - CVE-2022-24450 nats-server: misusing the \"dynamically provisioned sandbox accounts\" feature authenticated user can obtain the privileges of the System account\n2052702 - multicluster_operators_hub_subscription issues due to /tmp usage\n2053429 - CVE-2022-23806 golang: crypto/elliptic IsOnCurve returns true for invalid field elements\n2054954 - Create Cluster, Worker Pool 2 zones do not load options that relate to the selected Region field\n2059822 - Changing the multiclusterhub name other than the default name keeps the version in the web console loading. 
\n2065318 - search-redisgraph-0 generating massive amount of logs after 2.4.2 upgrade\n2072009 - CVE-2022-24785 Moment.js: Path traversal in moment.locale\n2073562 - Uninstall pod crashed when destroying Azure Gov cluster in ACM\n2074689 - CVE-2022-21803 nconf: Prototype pollution in memory store\n2075594 - Deprovisioned clusters not filtered out by discovery controller\n2075675 - When deleting a secret for a Helm application, duplicate errors show up in topology\n2075724 - Changing existing placement rules does not change YAML file\n2079906 - Editing Helm Argo Applications does not Prune Old Resources\n2080279 - CVE-2022-29810 go-getter: writes SSH credentials into logfile, exposing sensitive credentials to local uses\n2080713 - Failed to delete the requested resource [404] error appears after subscription is deleted and it\u0027s placement rule is used in the second subscription [Upgrade]\n2080716 - After Argo App Sets are created in an Upgraded Environment, the Clusters column does not indicate the clusters\n2080847 - CVE-2022-25645 dset: Prototype Pollution in dset\n2080960 - Typo in the logs when Deployable is updated in the subscription namespace\n2081438 - RHACM 2.4.5 images\n2081908 - Performance issue to get secret in claim-controller\n2084085 - CVE-2022-29526 golang: syscall: faccessat checks wrong group\n2094109 - Failed to provision openshift 4.10 on bare metal\n\n5. -----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA256\n\n=====================================================================\n Red Hat Security Advisory\n\nSynopsis: Important: kernel security and bug fix update\nAdvisory ID: RHSA-2022:5626-01\nProduct: Red Hat Enterprise Linux\nAdvisory URL: https://access.redhat.com/errata/RHSA-2022:5626\nIssue date: 2022-07-19\nCVE Names: CVE-2020-29368 CVE-2021-4197 CVE-2021-4203 \n CVE-2022-1012 CVE-2022-1729 CVE-2022-32250 \n=====================================================================\n\n1. 
Summary:\n\nAn update for kernel is now available for Red Hat Enterprise Linux 8.4\nExtended Update Support. \n\nRed Hat Product Security has rated this update as having a security impact\nof Important. A Common Vulnerability Scoring System (CVSS) base score,\nwhich gives a detailed severity rating, is available for each vulnerability\nfrom the CVE link(s) in the References section. \n\n2. Relevant releases/architectures:\n\nRed Hat CodeReady Linux Builder EUS (v. 8.4) - aarch64, ppc64le, x86_64\nRed Hat Enterprise Linux BaseOS EUS (v.8.4) - aarch64, noarch, ppc64le, s390x, x86_64\n\n3. \n\nSecurity Fix(es):\n\n* kernel: Small table perturb size in the TCP source port generation\nalgorithm can lead to information leak (CVE-2022-1012)\n\n* kernel: race condition in perf_event_open leads to privilege escalation\n(CVE-2022-1729)\n\n* kernel: a use-after-free write in the netfilter subsystem can lead to\nprivilege escalation to root (CVE-2022-32250)\n\n* kernel: cgroup: Use open-time creds and namespace for migration perm\nchecks (CVE-2021-4197)\n\n* kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n(CVE-2021-4203)\n\n* kernel: the copy-on-write implementation can grant unintended write\naccess because of a race condition in a THP mapcount check (CVE-2020-29368)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s) listed in the References section. 
\n\nBug Fix(es):\n\n* Failed to reboot after crash trigger (BZ#2060747)\n\n* conntrack entries linger around after test (BZ#2066357)\n\n* Enable nested virtualization (BZ#2079070)\n\n* slub corruption during LPM of hnv interface (BZ#2081251)\n\n* sleeping function called from invalid context at\nkernel/locking/spinlock_rt.c:35 (BZ#2082091)\n\n* Backport request of \"genirq: use rcu in kstat_irqs_usr()\" (BZ#2083309)\n\n* ethtool -L may cause system to hang (BZ#2083323)\n\n* For isolated CPUs (with nohz_full enabled for isolated CPUs) CPU\nutilization statistics are not getting reflected continuously (BZ#2084139)\n\n* Affinity broken due to vector space exhaustion (BZ#2084647)\n\n* kernel memory leak while freeing nested actions (BZ#2086597)\n\n* sync rhel-8.6 with upstream 5.13 through 5.16 fixes and improvements\n(BZ#2088037)\n\n* Kernel panic possibly when cleaning namespace on pod deletion\n(BZ#2089539)\n\n* Softirq hrtimers are being placed on the per-CPU softirq clocks on\nisolcpu\u2019s. (BZ#2090485)\n\n* fix missed wake-ups in rq_qos_throttle try two (BZ#2092076)\n\n* NFS4 client experiencing IO outages while sending duplicate SYNs and\nerroneous RSTs during connection reestablishment (BZ#2094334)\n\n* using __this_cpu_read() in preemptible [00000000] code:\nkworker/u66:1/937154 (BZ#2095775)\n\n* Need some changes in RHEL8.x kernels. (BZ#2096932)\n\n4. Solution:\n\nFor details on how to apply this update, which includes the changes\ndescribed in this advisory, refer to:\n\nhttps://access.redhat.com/articles/11258\n\nThe system must be rebooted for this update to take effect. \n\n5. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n1903244 - CVE-2020-29368 kernel: the copy-on-write implementation can grant unintended write access because of a race condition in a THP mapcount check\n2035652 - CVE-2021-4197 kernel: cgroup: Use open-time creds and namespace for migration perm checks\n2036934 - CVE-2021-4203 kernel: Race condition in races in sk_peer_pid and sk_peer_cred accesses\n2064604 - CVE-2022-1012 kernel: Small table perturb size in the TCP source port generation algorithm can lead to information leak\n2086753 - CVE-2022-1729 kernel: race condition in perf_event_open leads to privilege escalation\n2092427 - CVE-2022-32250 kernel: a use-after-free write in the netfilter subsystem can lead to privilege escalation to root\n\n6. Package List:\n\nRed Hat Enterprise Linux BaseOS EUS (v.8.4):\n\nSource:\nkernel-4.18.0-305.57.1.el8_4.src.rpm\n\naarch64:\nbpftool-4.18.0-305.57.1.el8_4.aarch64.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-core-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-libs-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-debuginfo-4.18.0-305.
57.1.el8_4.aarch64.rpm\npython3-perf-4.18.0-305.57.1.el8_4.aarch64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\n\nnoarch:\nkernel-abi-stablelists-4.18.0-305.57.1.el8_4.noarch.rpm\nkernel-doc-4.18.0-305.57.1.el8_4.noarch.rpm\n\nppc64le:\nbpftool-4.18.0-305.57.1.el8_4.ppc64le.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-core-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-libs-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\n\ns390x:\nbpftool-4.18.0-305.57.1.el8_4.s390x.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.s390x.
rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-debuginfo-common-s390x-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-core-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-devel-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-modules-4.18.0-305.57.1.el8_4.s390x.rpm\nkernel-zfcpdump-modules-extra-4.18.0-305.57.1.el8_4.s390x.rpm\nperf-4.18.0-305.57.1.el8_4.s390x.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\npython3-perf-4.18.0-305.57.1.el8_4.s390x.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.s390x.rpm\n\nx86_64:\nbpftool-4.18.0-305.57.1.el8_4.x86_64.rpm\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-core-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-cross-headers-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-core-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-modules-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-headers-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-modules-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-modules-extra-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-
tools-libs-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\n\nRed Hat CodeReady Linux Builder EUS (v. 8.4):\n\naarch64:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-debuginfo-common-aarch64-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.aarch64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.aarch64.rpm\n\nppc64le:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-debuginfo-common-ppc64le-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.ppc64le.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.ppc64le.rpm\n\nx86_64:\nbpftool-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debug-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-debuginfo-common-x86_64-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\nkernel-tools-libs-devel-4.18.0-305.57.1.el8_4.x86_64.rpm\nperf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\npython3-perf-debuginfo-4.18.0-305.57.1.el8_4.x86_64.rpm\n\nThese packages are GPG signed by Red Hat for security. Our key and\ndetails on how to verify the signature are available from\nhttps://access.redhat.com/security/team/key/\n\n7. Contact:\n\nThe Red Hat security contact is \u003csecalert@redhat.com\u003e. 
More contact\ndetails at https://access.redhat.com/security/team/contact/\n\nCopyright 2022 Red Hat, Inc. \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niQIVAwUBYuFkSdzjgjWX9erEAQhDCxAAknsy8K3eg1J603gMndUGWfI/Fs5VzIaH\nlxGavTw8H57lXRWbQYqJoRKk42uHAH2iCicovyvowJ5SdfnChtAVbG1A1wjJLmJJ\n0YDKoeMn3s1jjThivm5rWGQVdImqLw+CxVvb3Pywv6ZswTI5r4ZB4FEXW8GIR1w2\n1FeHTcwUgNLzeBLdVem1T50lWERG0j0ZGUmv9mu4QMDeWXoSoPcHKWnsmLgDvQif\ndVky3UsFoCJ783WJOIctmY97kOffqIDvZdbPwajAyTByspumtcwt6N7wMU6VfI+u\nB6bRGQgLbElY6IniLUsV7MG8GbbffZvPFNN/n6LdnnFgEt1eDlo6LkZCyPaMbEfx\n2dMxJtcAiXmydMs5QXvNJ3y2UR2fp/iHF8euAnSN3eKTLAxDQwo3c4KvNUKAfFcF\nOAjbyLTilLhiPHRARG4aEWCEUSmfzO3rulNhRcIEWtNIira3/QMFG9qUjNAMvzU1\nM4tMSPkH35gx49p2a6arZceUGDXiRwvrP142GzpAgRWt/GrydjAsRiG4pJM2H5TW\nnB5q7OuwEvch8+o8gJril5uOpm6eI1lylv9wTXbwjpzQqL5k2JcgByRWx8wLqYXy\nwXsBm+JZL9ztSadqoVsFWSqC0yeRkuF185F4gI7+7azjpeQhHtJEix3bgqRhIzK4\n07JERnC1IRg=\n=y1sh\n-----END PGP SIGNATURE-----\n--\nRHSA-announce mailing list\nRHSA-announce@redhat.com\nhttps://listman.redhat.com/mailman/listinfo/rhsa-announce\n. Summary:\n\nThe Migration Toolkit for Containers (MTC) 1.7.2 is now available. Description:\n\nThe Migration Toolkit for Containers (MTC) enables you to migrate\nKubernetes resources, persistent volume data, and internal container images\nbetween OpenShift Container Platform clusters, using the MTC web console or\nthe Kubernetes API. Bugs fixed (https://bugzilla.redhat.com/):\n\n2007557 - CVE-2021-3807 nodejs-ansi-regex: Regular expression denial of service (ReDoS) matching ANSI escape codes\n2038898 - [UI] \u201cUpdate Repository\u201d option not getting disabled after adding the Replication Repository details to the MTC web console\n2040693 - \u201cReplication repository\u201d wizard has no validation for name length\n2040695 - [MTC UI] \u201cAdd Cluster\u201d 
wizard stucks when the cluster name length is more than 63 characters\n2044591 - CVE-2022-0235 node-fetch: exposure of sensitive information to an unauthorized actor\n2048537 - Exposed route host to image registry? connecting successfully to invalid registry \u201cxyz.com\u201d\n2053259 - CVE-2022-0536 follow-redirects: Exposure of Sensitive Information via Authorization Header leak\n2055658 - [MTC UI] Cancel button on \u201cMigrations\u201d page does not disappear when migration gets Failed/Succeeded with warnings\n2056962 - [MTC UI] UI shows the wrong migration type info after changing the target namespace\n2058172 - [MTC UI] Successful Rollback is not showing the green success icon in the \u201cLast State\u201d field. \n2058529 - [MTC UI] Migrations Plan is missing the type for the state migration performed before upgrade\n2061335 - [MTC UI] \u201cUpdate cluster\u201d button is not getting disabled\n2062266 - MTC UI does not display logs properly [OADP-BL]\n2062862 - [MTC UI] Clusters page behaving unexpectedly on deleting the remote cluster\u0027s service account secret from backend\n2074675 - HPAs of DeploymentConfigs are not being updated when migration from Openshift 3.x to Openshift 4.x\n2076593 - Velero pod log missing from UI drop down\n2076599 - Velero pod log missing from downloaded logs folder [OADP-BL]\n2078459 - [MTC UI] Storageclass conversion plan is adding migstorage reference in migplan\n2079252 - [MTC] Rsync options logs not visible in log-reader pod\n2082221 - Don\u0027t allow Storage class conversion migration if source cluster has only one storage class defined [UI]\n2082225 - non-numeric user when launching stage pods [OADP-BL]\n2088022 - Default CPU requests on Velero/Restic are too demanding making scheduling fail in certain environments\n2088026 - Cloud propagation phase in migration controller is not doing anything due to missing labels on Velero pods\n2089126 - [MTC] Migration controller cannot find Velero Pod because of wrong labels\n2089411 - [MTC] Log reader pod is missing 
velero and restic pod logs [OADP-BL]\n2089859 - [Crane] DPA CR is missing the required flag - Migration is getting failed at the EnsureCloudSecretPropagated phase due to the missing secret VolumeMounts\n2090317 - [MTC] mig-operator failed to create a DPA CR due to null values are passed instead of int [OADP-BL]\n2096939 - Fix legacy operator.yml inconsistencies and errors\n2100486 - [MTC UI] Target storage class field is not getting respected when clusters don\u0027t have replication repo configured. Description:\n\nRed Hat OpenShift Container Platform is Red Hat\u0027s cloud computing\nKubernetes application platform solution designed for on-premise or private\ncloud deployments. \n\nThis advisory contains the container images for Red Hat OpenShift Container\nPlatform 4.9.45. See the following advisory for the RPM packages for this\nrelease:\n\nhttps://access.redhat.com/errata/RHBA-2022:5878\n\nSpace precludes documenting all of the container images in this advisory. \nSee the following Release Notes documentation, which will be updated\nshortly for this release, for details about these changes:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nSecurity Fix(es):\n\n* openshift: oauth-serving-cert configmap contains cluster certificate\nprivate key (CVE-2022-2403)\n\nFor more details about the security issue(s), including the impact, a CVSS\nscore, acknowledgments, and other related information, refer to the CVE\npage(s)\nlisted in the References section. 
\n\nYou may download the oc tool and use it to inspect release image metadata\nas follows:\n\n(For x86_64 architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-x86_64\n\nThe image digest is\nsha256:8ab373599e8a010dffb9c7ed45e01c00cb06a7857fe21de102d978be4738b2ec\n\n(For s390x architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-s390x\n\nThe image digest is\nsha256:1dde8a7134081c82012a812e014daca4cba1095630e6d0c74b51da141d472984\n\n(For ppc64le architecture)\n\n$ oc adm release info\nquay.io/openshift-release-dev/ocp-release:4.9.45-ppc64le\n\nThe image digest is\nsha256:ec1fac628bec05eb6425c2ae9dcd3fca120cd1a8678155350bb4c65813cfc30e\n\nAll OpenShift Container Platform 4.9 users are advised to upgrade to these\nupdated packages and images when they are available in the appropriate\nrelease channel. To check for available updates, use the OpenShift Console\nor the CLI oc command. Instructions for upgrading a cluster are available\nat\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n3. Solution:\n\nFor OpenShift Container Platform 4.9 see the following documentation, which\nwill be updated shortly for this release, for important instructions on how\nto upgrade your cluster and fully apply this asynchronous errata update:\n\nhttps://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html\n\nDetails on how to access this content are available at\nhttps://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html\n\n4. 
Bugs fixed (https://bugzilla.redhat.com/):\n\n2009024 - Unable to complete cluster destruction, some ports are left over\n2055494 - console operator should report Upgradeable False when SAN-less certs are used\n2083554 - post 1.23 rebase: regression in service-load balancer reliability\n2087021 - configure-ovs.sh fails, blocking new RHEL node from being scaled up on cluster without manual reboot\n2088539 - Openshift route URLs starting with double slashes stopped working after update to 4.8.33 - curl version problems\n2091806 - Cluster upgrade stuck due to \"resource deletions in progress\"\n2095320 - [4.9] Bootimage bump tracker\n2097157 - [4.9z] During ovnkube-node restart all host conntrack entries are flushed, leading to traffic disruption\n2100786 - [OCP 4.9] Ironic cannot match \"wwn\" rootDeviceHint for a multipath device\n2101664 - disabling ipv6 router advertisements using \"all\" does not disable it on secondary interfaces\n2101959 - CVE-2022-2403 openshift: oauth-serving-cert configmap contains cluster certificate private key\n2103982 - [4.9] AWS EBS CSI driver stuck removing EBS volumes - GetDeviceMountRefs check failed\n2105277 - NetworkPolicies: ovnkube-master pods crashing due to panic: \"invalid memory address or nil pointer dereference\"\n2105453 - Node reboot causes duplicate persistent volumes\n2105654 - egressIP panics with nil pointer dereference\n2105663 - APIRequestCount does not identify some APIs removed in 4.9\n2106655 - Kubelet slowly leaking memory and pods eventually unable to start\n2108538 - [4.9.z backport] br-ex not created due to default bond interface having a different mac address than expected\n2108619 - ClusterVersion history pruner does not always retain initial completed update entry\n\n5", "sources": [ { "db": "NVD", "id": "CVE-2021-4197" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "VULHUB", "id": "VHN-410862" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167602" }, { "db": 
"PACKETSTORM", "id": "167852" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "168019" }, { "db": "PACKETSTORM", "id": "167886" } ], "trust": 2.34 }, "exploit_availability": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/exploit_availability#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "reference": "https://www.scap.org.cn/vuln/vhn-410862", "trust": 0.1, "type": "unknown" } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" } ] }, "external_ids": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/external_ids#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "db": "NVD", "id": "CVE-2021-4197", "trust": 3.4 }, { "db": "JVNDB", "id": "JVNDB-2021-019487", "trust": 0.8 }, { "db": "PACKETSTORM", "id": "168019", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167886", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167852", "trust": 0.2 }, { "db": "PACKETSTORM", "id": "167694", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167746", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167443", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "168136", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166392", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167097", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167952", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167748", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167822", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167714", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167072", "trust": 0.1 }, { "db": "CNNVD", "id": "CNNVD-202201-1396", "trust": 0.1 }, { "db": "CNVD", "id": "CNVD-2022-68560", "trust": 0.1 }, { "db": "VULHUB", "id": "VHN-410862", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "166636", "trust": 0.1 }, { "db": "PACKETSTORM", 
"id": "167602", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167622", "trust": 0.1 }, { "db": "PACKETSTORM", "id": "167679", "trust": 0.1 } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167602" }, { "db": "PACKETSTORM", "id": "167852" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "168019" }, { "db": "PACKETSTORM", "id": "167886" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "id": "VAR-202201-0496", "iot": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/iot#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": true, "sources": [ { "db": "VULHUB", "id": "VHN-410862" } ], "trust": 0.725 }, "last_update_date": "2024-07-23T19:59:00.365000Z", "patch": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/patch#", "data": { "@container": "@list" }, "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "title": "NTAP-20220602-0006 Oracle Oracle\u00a0Critical\u00a0Patch\u00a0Update", "trust": 0.8, "url": "https://www.broadcom.com/" } ], "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-019487" } ] }, "problemtype_data": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/problemtype_data#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "problemtype": "CWE-287", "trust": 1.1 }, { "problemtype": "Inappropriate authentication (CWE-287) [NVD evaluation ]", "trust": 0.8 } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "references": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/references#", "data": { "@container": "@list" }, "sources": { "@container": "@list", 
"@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": [ { "trust": 1.9, "url": "https://www.debian.org/security/2022/dsa-5127" }, { "trust": 1.9, "url": "https://www.debian.org/security/2022/dsa-5173" }, { "trust": 1.9, "url": "https://bugzilla.redhat.com/show_bug.cgi?id=2035652" }, { "trust": 1.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4197" }, { "trust": 1.1, "url": "https://security.netapp.com/advisory/ntap-20220602-0006/" }, { "trust": 1.1, "url": "https://www.oracle.com/security-alerts/cpujul2022.html" }, { "trust": 1.0, "url": "https://lore.kernel.org/lkml/20211209214707.805617-1-tj%40kernel.org/t/" }, { "trust": 0.5, "url": "https://access.redhat.com/security/team/contact/" }, { "trust": 0.5, "url": "https://bugzilla.redhat.com/):" }, { "trust": 0.5, "url": "https://listman.redhat.com/mailman/listinfo/rhsa-announce" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-4197" }, { "trust": 0.5, "url": "https://access.redhat.com/security/cve/cve-2021-4203" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3752" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4157" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3744" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-13974" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-41617" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45485" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3773" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4002" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43976" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-0941" }, { "trust": 0.3, "url": 
"https://access.redhat.com/security/cve/cve-2021-43389" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3634" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4189" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-44733" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1271" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3634" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4037" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-29154" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-37159" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-4788" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3772" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-0404" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3669" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3764" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-43056" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-41864" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2018-25032" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-0941" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3612" }, { "trust": 0.3, "url": "https://access.redhat.com/security/updates/classification/#moderate" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-26401" }, { "trust": 
0.3, "url": "https://access.redhat.com/security/cve/cve-2020-27820" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3743" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3737" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-1011" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-13974" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-20322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-4083" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-45486" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0322" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2020-4788" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-26401" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0286" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0001" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-3759" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-21781" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2022-0002" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2018-25032" }, { "trust": 0.3, "url": "https://access.redhat.com/security/cve/cve-2021-42739" }, { "trust": 0.3, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-0404" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-19131" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3696" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-38185" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28733" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-21803" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29526" 
}, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28736" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3697" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28734" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-25219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28737" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-25219" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2021-3695" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-28735" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-24785" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-23806" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-29810" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-19131" }, { "trust": 0.2, "url": "https://access.redhat.com/security/updates/classification/#important" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1729" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-32250" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4203" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1729" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-1012" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2020-29368" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1012" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-32250" }, { "trust": 0.2, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-29368" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0235" }, { "trust": 0.2, "url": "https://access.redhat.com/security/cve/cve-2022-0536" }, { "trust": 0.1, "url": 
"https://lore.kernel.org/lkml/20211209214707.805617-1-tj@kernel.org/t/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-44733" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28711" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-28715" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39685" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45402" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0382" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-23222" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1055" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0264" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-39698" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-43975" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-azure-5.13/5.13.0-1021.24~20.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-4135" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0492" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0516" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45095" }, { "trust": 0.1, "url": "https://launchpad.net/ubuntu/+source/linux-oracle-5.13/5.13.0-1025.30~20.04.1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-27666" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0742" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5368-1" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-0435" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-45480" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-25645" }, { "trust": 0.1, "url": 
"https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-43565" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5201" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-24450" }, { "trust": 0.1, "url": "https://access.redhat.com/articles/11258" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5626" }, { "trust": 0.1, "url": "https://access.redhat.com/security/team/key/" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2021-3669" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1708" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html-single/install/index#installing" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-0492" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5392" }, { "trust": 0.1, "url": "https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/release_notes/index" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/latest/migration_toolkit_for_containers/installing-mtc.html" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2021-3807" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-1154" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-26691" }, { "trust": 
0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5483" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-23852" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2020-35492" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-34169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21540" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-cli.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21540" }, { "trust": 0.1, "url": "https://docs.openshift.com/container-platform/4.9/release_notes/ocp-4-9-release-notes.html" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-21541" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-34169" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-21541" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2403" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhba-2022:5878" }, { "trust": 0.1, "url": "https://access.redhat.com/security/cve/cve-2022-2403" }, { "trust": 0.1, "url": "https://access.redhat.com/errata/rhsa-2022:5879" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-2380" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1011" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28388" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1199" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1198" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-28389" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1205" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1516" }, { "trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1204" }, { "trust": 0.1, "url": "https://ubuntu.com/security/notices/usn-5541-1" }, { 
"trust": 0.1, "url": "https://nvd.nist.gov/vuln/detail/cve-2022-1353" } ], "sources": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167602" }, { "db": "PACKETSTORM", "id": "167852" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "168019" }, { "db": "PACKETSTORM", "id": "167886" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#", "data": { "@container": "@list" } }, "data": [ { "db": "VULHUB", "id": "VHN-410862" }, { "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167602" }, { "db": "PACKETSTORM", "id": "167852" }, { "db": "PACKETSTORM", "id": "167622" }, { "db": "PACKETSTORM", "id": "167679" }, { "db": "PACKETSTORM", "id": "168019" }, { "db": "PACKETSTORM", "id": "167886" }, { "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources_release_date": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources_release_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2022-03-23T00:00:00", "db": "VULHUB", "id": "VHN-410862" }, { "date": "2023-08-02T00:00:00", "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "date": "2022-04-07T16:37:07", "db": "PACKETSTORM", "id": "166636" }, { "date": "2022-06-28T15:20:26", "db": "PACKETSTORM", "id": "167602" }, { "date": "2022-07-27T17:32:01", "db": "PACKETSTORM", "id": "167852" }, { "date": "2022-06-29T20:27:02", "db": "PACKETSTORM", "id": "167622" }, { "date": "2022-07-01T15:04:32", "db": "PACKETSTORM", "id": "167679" }, { "date": "2022-08-10T15:50:18", "db": "PACKETSTORM", "id": "168019" }, { "date": "2022-07-29T14:39:49", "db": "PACKETSTORM", "id": "167886" }, { "date": "2022-03-23T20:15:10.200000", "db": "NVD", "id": "CVE-2021-4197" } ] }, "sources_update_date": { "@context": { "@vocab": 
"https://www.variotdbs.pl/ref/sources_update_date#", "data": { "@container": "@list" } }, "data": [ { "date": "2023-02-03T00:00:00", "db": "VULHUB", "id": "VHN-410862" }, { "date": "2023-08-02T06:47:00", "db": "JVNDB", "id": "JVNDB-2021-019487" }, { "date": "2023-11-07T03:40:21.077000", "db": "NVD", "id": "CVE-2021-4197" } ] }, "threat_type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/threat_type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "local", "sources": [ { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167886" } ], "trust": 0.2 }, "title": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/title#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "Linux\u00a0Kernel\u00a0 Authentication vulnerability in", "sources": [ { "db": "JVNDB", "id": "JVNDB-2021-019487" } ], "trust": 0.8 }, "type": { "@context": { "@vocab": "https://www.variotdbs.pl/ref/type#", "sources": { "@container": "@list", "@context": { "@vocab": "https://www.variotdbs.pl/ref/sources#" } } }, "data": "arbitrary", "sources": [ { "db": "PACKETSTORM", "id": "166636" }, { "db": "PACKETSTORM", "id": "167886" } ], "trust": 0.2 } }
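The record above describes an unprivileged-write flaw that applies to both the cgroup1 and cgroup2 hierarchies. As a minimal triage sketch (assuming the conventional /sys/fs/cgroup mount point; this check is not part of any advisory above), the hierarchy in use on a host can be identified from the mount's filesystem type:

```shell
# Identify which cgroup hierarchy a host is running.
# Assumption: /sys/fs/cgroup is mounted at the conventional path.
fstype=$(stat -fc %T /sys/fs/cgroup/ 2>/dev/null)
case "$fstype" in
  cgroup2fs) echo "unified cgroup v2 hierarchy" ;;
  tmpfs)     echo "legacy cgroup v1 hierarchy (per-controller mounts)" ;;
  *)         echo "cgroup mount not found or unrecognized: $fstype" ;;
esac
```

On a v1 host the individual controllers are mounted beneath /sys/fs/cgroup, while a v2 host exposes a single unified tree; either layout is relevant here, since the flaw affects both versions of control groups.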