Re: Linux 6.12.75

From: Sasha Levin

Date: Wed Mar 04 2026 - 08:17:09 EST


diff --git a/Documentation/PCI/endpoint/pci-vntb-howto.rst b/Documentation/PCI/endpoint/pci-vntb-howto.rst
index 70d3bc90893f3..949c0d35694c2 100644
--- a/Documentation/PCI/endpoint/pci-vntb-howto.rst
+++ b/Documentation/PCI/endpoint/pci-vntb-howto.rst
@@ -52,14 +52,14 @@ pci-epf-vntb device, the following commands can be used::
# cd /sys/kernel/config/pci_ep/
# mkdir functions/pci_epf_vntb/func1

-The "mkdir func1" above creates the pci-epf-ntb function device that will
+The "mkdir func1" above creates the pci-epf-vntb function device that will
be probed by pci_epf_vntb driver.

The PCI endpoint framework populates the directory with the following
configurable fields::

- # ls functions/pci_epf_ntb/func1
- baseclass_code deviceid msi_interrupts pci-epf-ntb.0
+ # ls functions/pci_epf_vntb/func1
+ baseclass_code deviceid msi_interrupts pci-epf-vntb.0
progif_code secondary subsys_id vendorid
cache_line_size interrupt_pin msix_interrupts primary
revid subclass_code subsys_vendor_id
@@ -106,13 +106,13 @@ A sample configuration for virtual NTB driver for virtual PCI bus::
# echo 0x080A > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vntb_pid
# echo 0x10 > functions/pci_epf_vntb/func1/pci_epf_vntb.0/vbus_number

-Binding pci-epf-ntb Device to EP Controller
+Binding pci-epf-vntb Device to EP Controller
--------------------------------------------

NTB function device should be attached to PCI endpoint controllers
connected to the host.

- # ln -s controllers/5f010000.pcie_ep functions/pci-epf-ntb/func1/primary
+ # ln -s controllers/5f010000.pcie_ep functions/pci_epf_vntb/func1/primary

Once the above step is completed, the PCI endpoint controllers are ready to
establish a link with the host.
@@ -134,7 +134,7 @@ lspci Output at Host side
-------------------------

Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::

# lspci
00:00.0 PCI bridge: Freescale Semiconductor Inc Device 0000 (rev 01)
@@ -147,7 +147,7 @@ lspci Output at EP Side / Virtual PCI bus
-----------------------------------------

Note that the devices listed here correspond to the values populated in
-"Creating pci-epf-ntb Device" section above::
+"Creating pci-epf-vntb Device" section above::

# lspci
10:00.0 Unassigned class [ffff]: Dawicontrol Computersysteme GmbH Device 1234 (rev ff)
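[Editorial note: the corrected pci_epf_vntb directory names above can be sanity-checked without endpoint hardware. A minimal dry-run sketch follows, replaying the configfs sequence in a temporary directory; the paths mirror the howto, but real configfs lives under /sys/kernel/config and requires the pci_epf_vntb driver.]

```shell
# Dry-run of the howto's configfs steps in a scratch directory.
# Nothing here touches real configfs; it only checks the directory
# and symlink layout the corrected document describes.
root=$(mktemp -d)
mkdir -p "$root/controllers/5f010000.pcie_ep"
mkdir -p "$root/functions/pci_epf_vntb/func1"
# Bind the (simulated) function device to the (simulated) EP controller,
# as in the "Binding pci-epf-vntb Device to EP Controller" section.
ln -s "$root/controllers/5f010000.pcie_ep" \
      "$root/functions/pci_epf_vntb/func1/primary"
readlink "$root/functions/pci_epf_vntb/func1/primary"
```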
diff --git a/Documentation/devicetree/bindings/phy/qcom,edp-phy.yaml b/Documentation/devicetree/bindings/phy/qcom,edp-phy.yaml
index 4e15d90d08b0e..27d6bbed5da0f 100644
--- a/Documentation/devicetree/bindings/phy/qcom,edp-phy.yaml
+++ b/Documentation/devicetree/bindings/phy/qcom,edp-phy.yaml
@@ -31,12 +31,15 @@ properties:
- description: PLL register block

clocks:
- maxItems: 2
+ minItems: 2
+ maxItems: 3

clock-names:
+ minItems: 2
items:
- const: aux
- const: cfg_ahb
+ - const: ref

"#clock-cells":
const: 1
@@ -58,6 +61,29 @@ required:
- "#clock-cells"
- "#phy-cells"

+allOf:
+ - if:
+ properties:
+ compatible:
+ enum:
+ - qcom,x1e80100-dp-phy
+ then:
+ properties:
+ clocks:
+ minItems: 3
+ maxItems: 3
+ clock-names:
+ minItems: 3
+ maxItems: 3
+ else:
+ properties:
+ clocks:
+ minItems: 2
+ maxItems: 2
+ clock-names:
+ minItems: 2
+ maxItems: 2
+
additionalProperties: false

examples:
diff --git a/Documentation/devicetree/bindings/sound/asahi-kasei,ak4458.yaml b/Documentation/devicetree/bindings/sound/asahi-kasei,ak4458.yaml
index 4477f84b7acc0..e44801ddbd2a9 100644
--- a/Documentation/devicetree/bindings/sound/asahi-kasei,ak4458.yaml
+++ b/Documentation/devicetree/bindings/sound/asahi-kasei,ak4458.yaml
@@ -18,10 +18,10 @@ properties:
reg:
maxItems: 1

- avdd-supply:
+ AVDD-supply:
description: Analog power supply

- dvdd-supply:
+ DVDD-supply:
description: Digital power supply

reset-gpios:
@@ -56,7 +56,7 @@ allOf:
properties:
dsd-path: false

-additionalProperties: false
+unevaluatedProperties: false

examples:
- |
diff --git a/Documentation/devicetree/bindings/sound/asahi-kasei,ak5558.yaml b/Documentation/devicetree/bindings/sound/asahi-kasei,ak5558.yaml
index d3d494ae8abfe..dc8f85f266bf3 100644
--- a/Documentation/devicetree/bindings/sound/asahi-kasei,ak5558.yaml
+++ b/Documentation/devicetree/bindings/sound/asahi-kasei,ak5558.yaml
@@ -19,10 +19,10 @@ properties:
reg:
maxItems: 1

- avdd-supply:
+ AVDD-supply:
description: A 1.8V supply that powers up the AVDD pin.

- dvdd-supply:
+ DVDD-supply:
description: A 1.2V supply that powers up the DVDD pin.

reset-gpios:
diff --git a/Documentation/hwmon/mpq8785.rst b/Documentation/hwmon/mpq8785.rst
index bf8176b870868..b91fefb1a84cd 100644
--- a/Documentation/hwmon/mpq8785.rst
+++ b/Documentation/hwmon/mpq8785.rst
@@ -5,6 +5,7 @@ Kernel driver mpq8785

Supported chips:

+ * MPS MPM82504
* MPS MPQ8785

Prefix: 'mpq8785'
@@ -14,6 +15,14 @@ Author: Charles Hsu <ythsu0511@xxxxxxxxx>
Description
-----------

+The MPM82504 is a quad 25A, scalable, fully integrated power module with a PMBus
+interface. The device offers a complete power solution that achieves up to 25A
+per output channel. The MPM82504 has four output channels that can be paralleled
+to provide 50A, 75A, or 100A of output current for flexible configurations.
+The device can also operate in parallel with the MPM3695-100 and additional
+MPM82504 devices to provide a higher output current. The MPM82504 operates
+at high efficiency across a wide load range.
+
The MPQ8785 is a fully integrated, PMBus-compatible, high-frequency, synchronous
buck converter. The MPQ8785 offers a very compact solution that achieves up to
40A output current per phase, with excellent load and line regulation over a
@@ -23,18 +32,19 @@ output current load range.
The PMBus interface provides converter configurations and key parameters
monitoring.

-The MPQ8785 adopts MPS's proprietary multi-phase digital constant-on-time (MCOT)
The devices adopt MPS's proprietary multi-phase digital constant-on-time (MCOT)
control, which provides fast transient response and eases loop stabilization.
-The MCOT scheme also allows multiple MPQ8785 devices to be connected in parallel
-with excellent current sharing and phase interleaving for high-current
+The MCOT scheme also allows multiple devices or channels to be connected in
+parallel with excellent current sharing and phase interleaving for high-current
applications.

Fully integrated protection features include over-current protection (OCP),
over-voltage protection (OVP), under-voltage protection (UVP), and
over-temperature protection (OTP).

-The MPQ8785 requires a minimal number of readily available, standard external
-components, and is available in a TLGA (5mmx6mm) package.
+All supported modules require a minimal number of readily available, standard
+external components. The MPM82504 is available in a BGA (15mmx30mmx5.18mm)
+package and the MPQ8785 is available in a TLGA (5mmx6mm) package.

Device compliant with:

diff --git a/Documentation/trace/events-pci.rst b/Documentation/trace/events-pci.rst
new file mode 100644
index 0000000000000..03ff4ad30ddfa
--- /dev/null
+++ b/Documentation/trace/events-pci.rst
@@ -0,0 +1,74 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+Subsystem Trace Points: PCI
+===========================
+
+Overview
+========
+The PCI tracing system provides tracepoints to monitor critical hardware events
+that can impact system performance and reliability. These events normally show
+up here:
+
+ /sys/kernel/tracing/events/pci
+
+Cf. include/trace/events/pci.h for the event definitions.
+
+Available Tracepoints
+=====================
+
+pci_hp_event
+------------
+
+Monitors PCI hotplug events including card insertion/removal and link
+state changes.
+::
+
+ pci_hp_event "%s slot:%s, event:%s\n"
+
+**Event Types**:
+
+* ``LINK_UP`` - PCIe link established
+* ``LINK_DOWN`` - PCIe link lost
+* ``CARD_PRESENT`` - Card detected in slot
+* ``CARD_NOT_PRESENT`` - Card removed from slot
+
+**Example Usage**::
+
+ # Enable the tracepoint
+ echo 1 > /sys/kernel/debug/tracing/events/pci/pci_hp_event/enable
+
+ # Monitor events (the following output is generated when a device is hotplugged)
+ cat /sys/kernel/debug/tracing/trace_pipe
+ irq/51-pciehp-88 [001] ..... 1311.177459: pci_hp_event: 0000:00:02.0 slot:10, event:CARD_PRESENT
+
+ irq/51-pciehp-88 [001] ..... 1311.177566: pci_hp_event: 0000:00:02.0 slot:10, event:LINK_UP
+
+pcie_link_event
+---------------
+
+Monitors PCIe link speed changes and provides detailed link status information.
+::
+
+ pcie_link_event "%s type:%d, reason:%d, cur_bus_speed:%d, max_bus_speed:%d, width:%u, flit_mode:%u, status:%s\n"
+
+**Parameters**:
+
+* ``type`` - PCIe device type (4=Root Port, etc.)
+* ``reason`` - Reason for link change:
+
+ - ``0`` - Link retrain
+ - ``1`` - Bus enumeration
+ - ``2`` - Bandwidth notification enable
+ - ``3`` - Bandwidth notification IRQ
+ - ``4`` - Hotplug event
+
+
+**Example Usage**::
+
+ # Enable the tracepoint
+ echo 1 > /sys/kernel/debug/tracing/events/pci/pcie_link_event/enable
+
+ # Monitor events (the following output is generated when a device is hotplugged)
+ cat /sys/kernel/debug/tracing/trace_pipe
+ irq/51-pciehp-88 [001] ..... 381.545386: pcie_link_event: 0000:00:02.0 type:4, reason:4, cur_bus_speed:20, max_bus_speed:23, width:1, flit_mode:0, status:DLLLA
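[Editorial note: the trace_pipe records shown in the new document are plain text, so they can be post-processed with standard tools. A minimal sketch, assuming only the slot:/event: field layout from the pci_hp_event format string above; the sample line is copied from the document.]

```shell
# Extract the slot and event fields from a pci_hp_event trace line.
line='irq/51-pciehp-88 [001] ..... 1311.177459: pci_hp_event: 0000:00:02.0 slot:10, event:CARD_PRESENT'
slot=$(printf '%s\n' "$line" | sed -n 's/.*slot:\([^,]*\),.*/\1/p')
event=$(printf '%s\n' "$line" | sed -n 's/.*event:\([A-Z_]*\).*/\1/p')
echo "slot=$slot event=$event"
```

The same two sed expressions work on a live `cat trace_pipe` stream once the tracepoint is enabled.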
diff --git a/Documentation/trace/index.rst b/Documentation/trace/index.rst
index 0b300901fd750..e9bcb9d9f7f3b 100644
--- a/Documentation/trace/index.rst
+++ b/Documentation/trace/index.rst
@@ -1,38 +1,104 @@
-==========================
-Linux Tracing Technologies
-==========================
+================================
+Linux Tracing Technologies Guide
+================================
+
+Tracing in the Linux kernel is a powerful mechanism that allows
+developers and system administrators to analyze and debug system
+behavior. This guide provides documentation on various tracing
+frameworks and tools available in the Linux kernel.
+
+Introduction to Tracing
+-----------------------
+
+This section provides an overview of Linux tracing mechanisms
+and debugging approaches.

.. toctree::
:maxdepth: 2

- ftrace-design
+ debugging
+ tracepoints
tracepoint-analysis
+ ring-buffer-map
+
+Core Tracing Frameworks
+-----------------------
+
+The following are the primary tracing frameworks integrated into
+the Linux kernel.
+
+.. toctree::
+ :maxdepth: 1
+
ftrace
+ ftrace-design
ftrace-uses
- fprobe
kprobes
kprobetrace
- uprobetracer
fprobetrace
- tracepoints
+ fprobe
+ ring-buffer-design
+
+Event Tracing and Analysis
+--------------------------
+
+A detailed explanation of event tracing mechanisms and their
+applications.
+
+.. toctree::
+ :maxdepth: 1
+
events
events-kmem
events-power
events-nmi
events-msr
- mmiotrace
+ events-pci
+ boottime-trace
histogram
histogram-design
- boottime-trace
- hwlat_detector
- osnoise-tracer
- timerlat-tracer
+
+Hardware and Performance Tracing
+--------------------------------
+
+This section covers tracing features that monitor hardware
+interactions and system performance.
+
+.. toctree::
+ :maxdepth: 1
+
intel_th
- ring-buffer-design
- ring-buffer-map
stm
sys-t
coresight/index
- user_events
rv/index
hisi-ptt
+ mmiotrace
+ hwlat_detector
+ osnoise-tracer
+ timerlat-tracer
+
+User-Space Tracing
+------------------
+
+These tools allow tracing user-space applications and
+interactions.
+
+.. toctree::
+ :maxdepth: 1
+
+ user_events
+ uprobetracer
+
+Additional Resources
+--------------------
+
+For more details, refer to the respective documentation of each
+tracing tool and framework.
+
+.. only:: subproject and html
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Makefile b/Makefile
index 94c1c8c7f899f..882a4dd080e67 100644
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0
VERSION = 6
PATCHLEVEL = 12
-SUBLEVEL = 74
+SUBLEVEL = 75
EXTRAVERSION =
NAME = Baby Opossum Posse

@@ -1370,6 +1370,15 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
$(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
endif

+PHONY += objtool_clean
+
+objtool_O = $(abspath $(objtree))/tools/objtool
+
+objtool_clean:
+ifneq ($(wildcard $(objtool_O)),)
+ $(Q)$(MAKE) -sC $(abs_srctree)/tools/objtool O=$(objtool_O) srctree=$(abs_srctree) clean
+endif
+
tools/: FORCE
$(Q)mkdir -p $(objtree)/tools
$(Q)$(MAKE) O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
@@ -1527,7 +1536,7 @@ vmlinuxclean:
$(Q)$(CONFIG_SHELL) $(srctree)/scripts/link-vmlinux.sh clean
$(Q)$(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) clean)

-clean: archclean vmlinuxclean resolve_btfids_clean
+clean: archclean vmlinuxclean resolve_btfids_clean objtool_clean

# mrproper - Delete all generated files, including .config
#
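[Editorial note: the new objtool_clean target follows the same $(wildcard ...) guard as the existing resolve_btfids_clean, so the sub-make clean only runs when the tool's output directory actually exists. A standalone sketch of that guard under stated assumptions: the directory names are illustrative, and .RECIPEPREFIX is used only to avoid literal tabs in the heredoc.]

```shell
# Demonstrate the $(wildcard ...) guard: the "clean" action is skipped
# when the output directory is absent and runs once it exists.
tmp=$(mktemp -d)
cat > "$tmp/Makefile" <<'EOF'
.RECIPEPREFIX = >
objtool_O = build/tools/objtool
objtool_clean:
ifneq ($(wildcard $(objtool_O)),)
>@echo "cleaning $(objtool_O)"
else
>@echo "nothing to clean"
endif
EOF
out1=$(make -s --no-print-directory -C "$tmp" objtool_clean)   # directory absent
mkdir -p "$tmp/build/tools/objtool"
out2=$(make -s --no-print-directory -C "$tmp" objtool_clean)   # directory present
echo "$out1 / $out2"
```

Note the conditional is evaluated when the Makefile is parsed, which is also how the kernel's top-level Makefile behaves: each `make clean` invocation re-checks whether the directory exists.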
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 02e8817a89212..8b620551304e9 100644
--- a/arch/alpha/include/asm/pgtable.h
+++ b/arch/alpha/include/asm/pgtable.h
@@ -17,6 +17,7 @@
#include <asm/processor.h> /* For TASK_SIZE */
#include <asm/machvec.h>
#include <asm/setup.h>
+#include <linux/page_table_check.h>

struct mm_struct;
struct vm_area_struct;
@@ -213,6 +214,9 @@ extern inline void pud_set(pud_t * pudp, pmd_t * pmdp)
{ pud_val(*pudp) = _PAGE_TABLE | ((((unsigned long) pmdp) - PAGE_OFFSET) << (32-PAGE_SHIFT)); }


+extern void migrate_flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long addr);
+
extern inline unsigned long
pmd_page_vaddr(pmd_t pmd)
{
@@ -232,7 +236,7 @@ extern inline int pte_none(pte_t pte) { return !pte_val(pte); }
extern inline int pte_present(pte_t pte) { return pte_val(pte) & _PAGE_VALID; }
extern inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
- pte_val(*ptep) = 0;
+ WRITE_ONCE(pte_val(*ptep), 0);
}

extern inline int pmd_none(pmd_t pmd) { return !pmd_val(pmd); }
@@ -294,6 +298,33 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address)

extern pgd_t swapper_pg_dir[1024];

+#ifdef CONFIG_COMPACTION
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long address,
+ pte_t *ptep)
+{
+ pte_t pte = READ_ONCE(*ptep);
+
+ pte_clear(mm, address, ptep);
+ return pte;
+}
+
+#define __HAVE_ARCH_PTEP_CLEAR_FLUSH
+
+static inline pte_t ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+
+ page_table_check_pte_clear(mm, pte);
+ migrate_flush_tlb_page(vma, addr);
+ return pte;
+}
+
+#endif
/*
* The Alpha doesn't have any external MMU info: the kernel page
* tables contain all the necessary information.
diff --git a/arch/alpha/include/asm/tlbflush.h b/arch/alpha/include/asm/tlbflush.h
index ba4b359d6c395..0c8529997f54e 100644
--- a/arch/alpha/include/asm/tlbflush.h
+++ b/arch/alpha/include/asm/tlbflush.h
@@ -58,7 +58,9 @@ flush_tlb_other(struct mm_struct *mm)
unsigned long *mmc = &mm->context[smp_processor_id()];
/* Check it's not zero first to avoid cacheline ping pong
when possible. */
- if (*mmc) *mmc = 0;
+
+ if (READ_ONCE(*mmc))
+ WRITE_ONCE(*mmc, 0);
}

#ifndef CONFIG_SMP
diff --git a/arch/alpha/mm/Makefile b/arch/alpha/mm/Makefile
index 101dbd06b4ceb..2d05664058f64 100644
--- a/arch/alpha/mm/Makefile
+++ b/arch/alpha/mm/Makefile
@@ -3,4 +3,4 @@
# Makefile for the linux alpha-specific parts of the memory manager.
#

-obj-y := init.o fault.o
+obj-y := init.o fault.o tlbflush.o
diff --git a/arch/alpha/mm/tlbflush.c b/arch/alpha/mm/tlbflush.c
new file mode 100644
index 0000000000000..ccbc317b9a348
--- /dev/null
+++ b/arch/alpha/mm/tlbflush.c
@@ -0,0 +1,112 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Alpha TLB shootdown helpers
+ *
+ * Copyright (C) 2025 Magnus Lindholm <linmag7@xxxxxxxxx>
+ *
+ * Alpha-specific TLB flush helpers that cannot be expressed purely
+ * as inline functions.
+ *
+ * These helpers provide combined MM context handling (ASN rollover)
+ * and immediate TLB invalidation for page migration and memory
+ * compaction paths, where lazy shootdowns are insufficient.
+ */
+
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/sched.h>
+#include <asm/tlbflush.h>
+#include <asm/pal.h>
+#include <asm/mmu_context.h>
+
+#define asn_locked() (cpu_data[smp_processor_id()].asn_lock)
+
+/*
+ * Migration/compaction helper: combine mm context (ASN) handling with an
+ * immediate per-page TLB invalidate and (for exec) an instruction barrier.
+ *
+ * This mirrors the SMP combined IPI handler semantics, but runs locally on UP.
+ */
+#ifndef CONFIG_SMP
+void migrate_flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long addr)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ int tbi_type = (vma->vm_flags & VM_EXEC) ? 3 : 2;
+
+ /*
+ * First do the mm-context side:
+ * If we're currently running this mm, reload a fresh context ASN.
+ * Otherwise, mark context invalid.
+ *
+ * On UP, this is mostly about matching the SMP semantics and ensuring
+ * exec/i-cache tagging assumptions hold when compaction migrates pages.
+ */
+ if (mm == current->active_mm)
+ flush_tlb_current(mm);
+ else
+ flush_tlb_other(mm);
+
+ /*
+ * Then do the immediate translation kill for this VA.
+ * For exec mappings, order instruction fetch after invalidation.
+ */
+ tbi(tbi_type, addr);
+}
+
+#else
+struct tlb_mm_and_addr {
+ struct mm_struct *mm;
+ unsigned long addr;
+ int tbi_type; /* 2 = DTB, 3 = ITB+DTB */
+};
+
+static void ipi_flush_mm_and_page(void *x)
+{
+ struct tlb_mm_and_addr *d = x;
+
+ /* Part 1: mm context side (Alpha uses ASN/context as a key mechanism). */
+ if (d->mm == current->active_mm && !asn_locked())
+ __load_new_mm_context(d->mm);
+ else
+ flush_tlb_other(d->mm);
+
+ /* Part 2: immediate per-VA invalidation on this CPU. */
+ tbi(d->tbi_type, d->addr);
+}
+
+void migrate_flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct tlb_mm_and_addr d = {
+ .mm = mm,
+ .addr = addr,
+ .tbi_type = (vma->vm_flags & VM_EXEC) ? 3 : 2,
+ };
+
+ /*
+ * One synchronous rendezvous: every CPU runs ipi_flush_mm_and_page().
+ * This is the "combined" version of flush_tlb_mm + per-page invalidate.
+ */
+ preempt_disable();
+ on_each_cpu(ipi_flush_mm_and_page, &d, 1);
+
+ /*
+ * mimic flush_tlb_mm()'s mm_users<=1 optimization.
+ */
+ if (atomic_read(&mm->mm_users) <= 1) {
+
+ int cpu, this_cpu;
+ this_cpu = smp_processor_id();
+
+ for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ if (!cpu_online(cpu) || cpu == this_cpu)
+ continue;
+ if (READ_ONCE(mm->context[cpu]))
+ WRITE_ONCE(mm->context[cpu], 0);
+ }
+ }
+ preempt_enable();
+}
+
+#endif
diff --git a/arch/arm/boot/dts/allwinner/sun5i-a13-utoo-p66.dts b/arch/arm/boot/dts/allwinner/sun5i-a13-utoo-p66.dts
index be486d28d04fa..428cab5a0e906 100644
--- a/arch/arm/boot/dts/allwinner/sun5i-a13-utoo-p66.dts
+++ b/arch/arm/boot/dts/allwinner/sun5i-a13-utoo-p66.dts
@@ -102,6 +102,7 @@ &touchscreen {
/* The P66 uses a different EINT then the reference design */
interrupts = <6 9 IRQ_TYPE_EDGE_FALLING>; /* EINT9 (PG9) */
/* The icn8318 binding expects wake-gpios instead of power-gpios */
+ /delete-property/ power-gpios;
wake-gpios = <&pio 1 3 GPIO_ACTIVE_HIGH>; /* PB3 */
touchscreen-size-x = <800>;
touchscreen-size-y = <480>;
diff --git a/arch/arm/boot/dts/nxp/lpc/lpc32xx.dtsi b/arch/arm/boot/dts/nxp/lpc/lpc32xx.dtsi
index 974410918f35b..7503074d2877c 100644
--- a/arch/arm/boot/dts/nxp/lpc/lpc32xx.dtsi
+++ b/arch/arm/boot/dts/nxp/lpc/lpc32xx.dtsi
@@ -301,8 +301,9 @@ i2c2: i2c@400a8000 {
mpwm: mpwm@400e8000 {
compatible = "nxp,lpc3220-motor-pwm";
reg = <0x400e8000 0x78>;
+ clocks = <&clk LPC32XX_CLK_MCPWM>;
+ #pwm-cells = <3>;
status = "disabled";
- #pwm-cells = <2>;
};
};

diff --git a/arch/arm/kernel/vdso.c b/arch/arm/kernel/vdso.c
index d499ad461b004..6e5494bf5a24d 100644
--- a/arch/arm/kernel/vdso.c
+++ b/arch/arm/kernel/vdso.c
@@ -172,6 +172,7 @@ static void __init patch_vdso(void *ehdr)
vdso_nullpatch_one(&einfo, "__vdso_gettimeofday");
vdso_nullpatch_one(&einfo, "__vdso_clock_gettime");
vdso_nullpatch_one(&einfo, "__vdso_clock_gettime64");
+ vdso_nullpatch_one(&einfo, "__vdso_clock_getres");
}
}

diff --git a/arch/arm/mach-omap2/control.c b/arch/arm/mach-omap2/control.c
index 79860b23030de..eb6fc7c61b6e0 100644
--- a/arch/arm/mach-omap2/control.c
+++ b/arch/arm/mach-omap2/control.c
@@ -732,7 +732,7 @@ int __init omap2_control_base_init(void)
*/
int __init omap_control_init(void)
{
- struct device_node *np, *scm_conf;
+ struct device_node *np, *scm_conf, *clocks_node;
const struct of_device_id *match;
const struct omap_prcm_init_data *data;
int ret;
@@ -753,16 +753,19 @@ int __init omap_control_init(void)

if (IS_ERR(syscon)) {
ret = PTR_ERR(syscon);
- goto of_node_put;
+ goto err_put_scm_conf;
}

- if (of_get_child_by_name(scm_conf, "clocks")) {
+ clocks_node = of_get_child_by_name(scm_conf, "clocks");
+ if (clocks_node) {
+ of_node_put(clocks_node);
ret = omap2_clk_provider_init(scm_conf,
data->index,
syscon, NULL);
if (ret)
- goto of_node_put;
+ goto err_put_scm_conf;
}
+ of_node_put(scm_conf);
} else {
/* No scm_conf found, direct access */
ret = omap2_clk_provider_init(np, data->index, NULL,
@@ -780,6 +783,9 @@ int __init omap_control_init(void)

return 0;

+err_put_scm_conf:
+ if (scm_conf)
+ of_node_put(scm_conf);
of_node_put:
of_node_put(np);
return ret;
diff --git a/arch/arm/mm/physaddr.c b/arch/arm/mm/physaddr.c
index 3f263c840ebc4..1a37ebfacbba9 100644
--- a/arch/arm/mm/physaddr.c
+++ b/arch/arm/mm/physaddr.c
@@ -38,7 +38,7 @@ static inline bool __virt_addr_valid(unsigned long x)
phys_addr_t __virt_to_phys(unsigned long x)
{
WARN(!__virt_addr_valid(x),
- "virt_to_phys used for non-linear address: %pK (%pS)\n",
+ "virt_to_phys used for non-linear address: %px (%pS)\n",
(void *)x, (void *)x);

return __virt_to_phys_nodebug(x);
diff --git a/arch/arm64/Kbuild b/arch/arm64/Kbuild
index 5bfbf7d79c99b..d876bc0e54211 100644
--- a/arch/arm64/Kbuild
+++ b/arch/arm64/Kbuild
@@ -1,4 +1,8 @@
# SPDX-License-Identifier: GPL-2.0-only
+
+# Branch profiling isn't noinstr-safe
+subdir-ccflags-$(CONFIG_TRACE_BRANCH_PROFILING) += -DDISABLE_BRANCH_PROFILING
+
obj-y += kernel/ mm/ net/
obj-$(CONFIG_KVM) += kvm/
obj-$(CONFIG_XEN) += xen/
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 40ae4dd961b15..0e2902f38e70e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -216,6 +216,7 @@ config ARM64
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_ERROR_INJECTION
+ select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_RETVAL
select HAVE_GCC_PLUGINS
diff --git a/arch/arm64/boot/dts/amlogic/amlogic-c3.dtsi b/arch/arm64/boot/dts/amlogic/amlogic-c3.dtsi
index d0cda759c25d0..edd92500ebc96 100644
--- a/arch/arm64/boot/dts/amlogic/amlogic-c3.dtsi
+++ b/arch/arm64/boot/dts/amlogic/amlogic-c3.dtsi
@@ -570,6 +570,10 @@ sdio: mmc@88000 {
no-sd;
resets = <&reset RESET_SD_EMMC_A>;
status = "disabled";
+
+ assigned-clocks = <&clkc_periphs CLKID_SD_EMMC_A>;
+ assigned-clock-rates = <24000000>;
+
};

sd: mmc@8a000 {
@@ -585,6 +589,9 @@ sd: mmc@8a000 {
no-sdio;
resets = <&reset RESET_SD_EMMC_B>;
status = "disabled";
+
+ assigned-clocks = <&clkc_periphs CLKID_SD_EMMC_B>;
+ assigned-clock-rates = <24000000>;
};

nand: nand-controller@8d000 {
diff --git a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
index e9b22868983db..4717c9666f2a5 100644
--- a/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-axg.dtsi
@@ -1923,6 +1923,9 @@ sd_emmc_b: mmc@5000 {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_B>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_B_CLK0>;
+ assigned-clock-rates = <24000000>;
};

sd_emmc_c: mmc@7000 {
@@ -1935,6 +1938,9 @@ sd_emmc_c: mmc@7000 {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_C>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_C_CLK0>;
+ assigned-clock-rates = <24000000>;
};

nfc: nand-controller@7800 {
diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
index d08c97797010d..e0ddb28a34c29 100644
--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
@@ -2408,6 +2408,9 @@ sd_emmc_a: mmc@ffe03000 {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_A>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_A_CLK0>;
+ assigned-clock-rates = <24000000>;
};

sd_emmc_b: mmc@ffe05000 {
@@ -2420,6 +2423,9 @@ sd_emmc_b: mmc@ffe05000 {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_B>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_B_CLK0>;
+ assigned-clock-rates = <24000000>;
};

sd_emmc_c: mmc@ffe07000 {
@@ -2432,6 +2438,9 @@ sd_emmc_c: mmc@ffe07000 {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_C>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_C_CLK0>;
+ assigned-clock-rates = <24000000>;
};

usb: usb@ffe09000 {
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
index ed00e67e6923a..851ae89dd17fa 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
@@ -799,6 +799,9 @@ &sd_emmc_a {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_A>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_A_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&sd_emmc_b {
@@ -807,6 +810,9 @@ &sd_emmc_b {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_B>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_B_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&sd_emmc_c {
@@ -815,6 +821,9 @@ &sd_emmc_c {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_C>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_C_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&simplefb_hdmi {
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
index f58d1790de1cb..f7fafebafd809 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxl.dtsi
@@ -869,6 +869,9 @@ &sd_emmc_a {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_A>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_A_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&sd_emmc_b {
@@ -877,6 +880,9 @@ &sd_emmc_b {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_B>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_B_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&sd_emmc_c {
@@ -885,6 +891,9 @@ &sd_emmc_c {
<&clkc CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_C>;
+
+ assigned-clocks = <&clkc CLKID_SD_EMMC_C_CLK0>;
+ assigned-clock-rates = <24000000>;
};

&simplefb_hdmi {
diff --git a/arch/arm64/boot/dts/amlogic/meson-s4.dtsi b/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
index 957577d986c06..7326aaa8d0ed7 100644
--- a/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-s4.dtsi
@@ -814,13 +814,16 @@ sdio: mmc@fe088000 {
reg = <0x0 0xfe088000 0x0 0x800>;
interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clkc_periphs CLKID_SDEMMC_A>,
- <&xtal>,
+ <&clkc_periphs CLKID_SD_EMMC_A>,
<&clkc_pll CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_A>;
cap-sdio-irq;
keep-power-in-suspend;
status = "disabled";
+
+ assigned-clocks = <&clkc_periphs CLKID_SD_EMMC_A>;
+ assigned-clock-rates = <24000000>;
};

sd: mmc@fe08a000 {
@@ -833,6 +836,9 @@ sd: mmc@fe08a000 {
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_SD_EMMC_B>;
status = "disabled";
+
+ assigned-clocks = <&clkc_periphs CLKID_SD_EMMC_B>;
+ assigned-clock-rates = <24000000>;
};

emmc: mmc@fe08c000 {
@@ -840,13 +846,16 @@ emmc: mmc@fe08c000 {
reg = <0x0 0xfe08c000 0x0 0x800>;
interrupts = <GIC_SPI 178 IRQ_TYPE_EDGE_RISING>;
clocks = <&clkc_periphs CLKID_NAND>,
- <&xtal>,
+ <&clkc_periphs CLKID_SD_EMMC_C>,
<&clkc_pll CLKID_FCLK_DIV2>;
clock-names = "core", "clkin0", "clkin1";
resets = <&reset RESET_NAND_EMMC>;
no-sdio;
no-sd;
status = "disabled";
+
+ assigned-clocks = <&clkc_periphs CLKID_SD_EMMC_C>;
+ assigned-clock-rates = <24000000>;
};
};
};
diff --git a/arch/arm64/boot/dts/apple/t8112-j473.dts b/arch/arm64/boot/dts/apple/t8112-j473.dts
index 06fe257f08be4..4ae1ce919dafc 100644
--- a/arch/arm64/boot/dts/apple/t8112-j473.dts
+++ b/arch/arm64/boot/dts/apple/t8112-j473.dts
@@ -21,6 +21,25 @@ aliases {
};
};

+/*
+ * Keep the power-domains used for the HDMI port on.
+ */
+&framebuffer0 {
+ power-domains = <&ps_dispext_cpu0>, <&ps_dptx_ext_phy>;
+};
+
+/*
+ * The M2 Mac mini uses dispext for the HDMI output so it's not necessary to
+ * keep disp0 power-domains always-on.
+ */
+&ps_disp0_sys {
+ /delete-property/ apple,always-on;
+};
+
+&ps_disp0_fe {
+ /delete-property/ apple,always-on;
+};
+
/*
* Force the bus number assignments so that we can declare some of the
* on-board devices and properties that are populated by the bootloader
diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
index f7346b3d35fe5..a122f2ed5f531 100644
--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mp-ras314.dts
@@ -704,7 +704,7 @@ pinctrl_hdmi: hdmigrp {
fsl,pins = <MX8MP_IOMUXC_HDMI_DDC_SCL__HDMIMIX_HDMI_SCL 0x400001c2>,
<MX8MP_IOMUXC_HDMI_DDC_SDA__HDMIMIX_HDMI_SDA 0x400001c2>,
<MX8MP_IOMUXC_HDMI_HPD__HDMIMIX_HDMI_HPD 0x40000010>,
- <MX8MP_IOMUXC_HDMI_CEC__HDMIMIX_HDMI_CEC 0x40000154>;
+ <MX8MP_IOMUXC_HDMI_CEC__HDMIMIX_HDMI_CEC 0x40000030>;
};

pinctrl_gpt1: gpt1grp {
diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
index e7c16a7ee6c26..773282002e6c6 100644
--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
+++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql-mba8mpxl.dts
@@ -859,7 +859,7 @@ pinctrl_hdmi: hdmigrp {
fsl,pins = <MX8MP_IOMUXC_HDMI_DDC_SCL__HDMIMIX_HDMI_SCL 0x400001c2>,
<MX8MP_IOMUXC_HDMI_DDC_SDA__HDMIMIX_HDMI_SDA 0x400001c2>,
<MX8MP_IOMUXC_HDMI_HPD__HDMIMIX_HDMI_HPD 0x40000010>,
- <MX8MP_IOMUXC_HDMI_CEC__HDMIMIX_HDMI_CEC 0x40000010>;
+ <MX8MP_IOMUXC_HDMI_CEC__HDMIMIX_HDMI_CEC 0x40000030>;
};

pinctrl_hoggpio2: hoggpio2grp {
diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
index cce326aec1aa5..40af5656d6f15 100644
--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
+++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-pico6.dts
@@ -91,7 +91,7 @@ bluetooth@2 {

&pio {
bt_pins_wakeup: bt-pins-wakeup {
- piins-bt-wakeup {
+ pins-bt-wakeup {
pinmux = <PINMUX_GPIO42__FUNC_GPIO42>;
input-enable;
};
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
index 2e5b6b2c1f56b..ff2952fae35c9 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
+++ b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
@@ -1782,6 +1782,8 @@ usb2-0 {
status = "okay";
vbus-supply = <&usbc_vbus>;
mode = "otg";
+ usb-role-switch;
+ role-switch-default-mode = "host";
};

usb3-0 {
diff --git a/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi b/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
index 10cd244dea4f7..ef8dfce8d9b2d 100644
--- a/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
+++ b/arch/arm64/boot/dts/qcom/msm8994-msft-lumia-octagon.dtsi
@@ -378,7 +378,7 @@ &blsp2_i2c1 {
status = "okay";

sideinteraction: touch@2c {
- compatible = "ad,ad7147_captouch";
+ compatible = "adi,ad7147_captouch";
reg = <0x2c>;

pinctrl-names = "default", "sleep";
diff --git a/arch/arm64/boot/dts/qcom/qcm2290.dtsi b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
index e75e6354b2d52..dbc78b2d5095a 100644
--- a/arch/arm64/boot/dts/qcom/qcm2290.dtsi
+++ b/arch/arm64/boot/dts/qcom/qcm2290.dtsi
@@ -1434,8 +1434,12 @@ usb_dwc3_ss: endpoint {

gpu: gpu@5900000 {
compatible = "qcom,adreno-07000200", "qcom,adreno";
- reg = <0x0 0x05900000 0x0 0x40000>;
- reg-names = "kgsl_3d0_reg_memory";
+ reg = <0x0 0x05900000 0x0 0x40000>,
+ <0x0 0x0599e000 0x0 0x1000>,
+ <0x0 0x05961000 0x0 0x800>;
+ reg-names = "kgsl_3d0_reg_memory",
+ "cx_mem",
+ "cx_dbgc";

interrupts = <GIC_SPI 177 IRQ_TYPE_LEVEL_HIGH>;

diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
index f99fb9159e0b6..547c6d4204601 100644
--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
@@ -636,7 +636,7 @@ sdc2_card_det_n: sd-card-det-n-state {

&uart3 {
interrupts-extended = <&intc GIC_SPI 330 IRQ_TYPE_LEVEL_HIGH>,
- <&tlmm 11 IRQ_TYPE_LEVEL_HIGH>;
+ <&tlmm 11 IRQ_TYPE_EDGE_FALLING>;
pinctrl-0 = <&uart3_default>;
pinctrl-1 = <&uart3_sleep>;
pinctrl-names = "default", "sleep";
diff --git a/arch/arm64/boot/dts/qcom/sdm630.dtsi b/arch/arm64/boot/dts/qcom/sdm630.dtsi
index c8da5cb8d04e9..37692d438d230 100644
--- a/arch/arm64/boot/dts/qcom/sdm630.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm630.dtsi
@@ -591,8 +591,8 @@ qusb2_hstx_trim: hstx-trim@240 {
};

gpu_speed_bin: gpu-speed-bin@41a0 {
- reg = <0x41a2 0x1>;
- bits = <5 7>;
+ reg = <0x41a2 0x2>;
+ bits = <5 8>;
};
};

diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
index 9a6d3d0c0ee43..8f3b31a30e247 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
+++ b/arch/arm64/boot/dts/qcom/sdm845-db845c.dts
@@ -378,6 +378,12 @@ vreg_l21a_2p95: ldo21 {
regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
};

+ vreg_l23a_3p3: ldo23 {
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3312000>;
+ regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ };
+
vreg_l24a_3p075: ldo24 {
regulator-min-microvolt = <3088000>;
regulator-max-microvolt = <3088000>;
@@ -858,7 +864,6 @@ &spi0 {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&qup_spi0_default>;
- cs-gpios = <&tlmm 3 GPIO_ACTIVE_LOW>;

can@0 {
compatible = "microchip,mcp2517fd";
@@ -1164,6 +1169,7 @@ &wifi {
vdd-1.8-xo-supply = <&vreg_l7a_1p8>;
vdd-1.3-rfa-supply = <&vreg_l17a_1p3>;
vdd-3.3-ch0-supply = <&vreg_l25a_3p3>;
+ vdd-3.3-ch1-supply = <&vreg_l23a_3p3>;

qcom,snoc-host-cap-8bit-quirk;
qcom,ath10k-calibration-variant = "Thundercomm_DB845C";
diff --git a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
index d0cbf9106a792..934bf9cfc5ac7 100644
--- a/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
+++ b/arch/arm64/boot/dts/qcom/sdm845-oneplus-common.dtsi
@@ -156,7 +156,6 @@ ts_1p8_supply: ts-1p8-regulator {

gpio = <&tlmm 88 0>;
enable-active-high;
- regulator-boot-on;
};
};

@@ -247,6 +246,7 @@ vreg_l12a_1p8: ldo12 {
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
regulator-initial-mode = <RPMH_REGULATOR_MODE_HPM>;
+ regulator-boot-on;
};

vreg_l14a_1p88: ldo14 {
diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
index 4adadfd1e51ae..e33da2975240f 100644
--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
+++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
@@ -1688,8 +1688,12 @@ usb_dwc3_ss: endpoint {

gpu: gpu@5900000 {
compatible = "qcom,adreno-610.0", "qcom,adreno";
- reg = <0x0 0x05900000 0x0 0x40000>;
- reg-names = "kgsl_3d0_reg_memory";
+ reg = <0x0 0x05900000 0x0 0x40000>,
+ <0x0 0x0599e000 0x0 0x1000>,
+ <0x0 0x05961000 0x0 0x800>;
+ reg-names = "kgsl_3d0_reg_memory",
+ "cx_mem",
+ "cx_dbgc";

/* There's no (real) GMU, so we have to handle quite a bunch of clocks! */
clocks = <&gpucc GPU_CC_GX_GFX3D_CLK>,
diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
index 8536403e6ac99..6eef89ccdd121 100644
--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
@@ -736,8 +736,8 @@ soc: soc@0 {

#address-cells = <2>;
#size-cells = <2>;
- dma-ranges = <0 0 0 0 0x10 0>;
- ranges = <0 0 0 0 0x10 0>;
+ dma-ranges = <0 0 0 0 0x100 0>;
+ ranges = <0 0 0 0 0x100 0>;

gcc: clock-controller@100000 {
compatible = "qcom,x1e80100-gcc";
@@ -2679,7 +2679,7 @@ usb_1_ss1_qmpphy: phy@fda000 {
reg = <0 0x00fda000 0 0x4000>;

clocks = <&gcc GCC_USB3_SEC_PHY_AUX_CLK>,
- <&rpmhcc RPMH_CXO_CLK>,
+ <&tcsr TCSR_USB4_1_CLKREF_EN>,
<&gcc GCC_USB3_SEC_PHY_COM_AUX_CLK>,
<&gcc GCC_USB3_SEC_PHY_PIPE_CLK>;
clock-names = "aux",
@@ -2749,7 +2749,7 @@ usb_1_ss2_qmpphy: phy@fdf000 {
reg = <0 0x00fdf000 0 0x4000>;

clocks = <&gcc GCC_USB3_TERT_PHY_AUX_CLK>,
- <&rpmhcc RPMH_CXO_CLK>,
+ <&tcsr TCSR_USB4_2_CLKREF_EN>,
<&gcc GCC_USB3_TERT_PHY_COM_AUX_CLK>,
<&gcc GCC_USB3_TERT_PHY_PIPE_CLK>;
clock-names = "aux",
@@ -5034,9 +5034,11 @@ mdss_dp2_phy: phy@aec2a00 {
<0 0x0aec2000 0 0x1c8>;

clocks = <&dispcc DISP_CC_MDSS_DPTX2_AUX_CLK>,
- <&dispcc DISP_CC_MDSS_AHB_CLK>;
+ <&dispcc DISP_CC_MDSS_AHB_CLK>,
+ <&tcsr TCSR_EDP_CLKREF_EN>;
clock-names = "aux",
- "cfg_ahb";
+ "cfg_ahb",
+ "ref";

power-domains = <&rpmhpd RPMHPD_MX>;

@@ -5054,9 +5056,11 @@ mdss_dp3_phy: phy@aec5a00 {
<0 0x0aec5000 0 0x1c8>;

clocks = <&dispcc DISP_CC_MDSS_DPTX3_AUX_CLK>,
- <&dispcc DISP_CC_MDSS_AHB_CLK>;
+ <&dispcc DISP_CC_MDSS_AHB_CLK>,
+ <&tcsr TCSR_EDP_CLKREF_EN>;
clock-names = "aux",
- "cfg_ahb";
+ "cfg_ahb",
+ "ref";

power-domains = <&rpmhpd RPMHPD_MX>;

diff --git a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
index a7afc83d2f266..20c5adb51d6d8 100644
--- a/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
+++ b/arch/arm64/boot/dts/rockchip/rk3399-pinebook-pro.dts
@@ -428,10 +428,6 @@ &gpu {
status = "okay";
};

-&hdmi_sound {
- status = "okay";
-};
-
&i2c0 {
clock-frequency = <400000>;
i2c-scl-falling-time-ns = <4>;
diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
index 013c0d25d3481..079ccefeabe91 100644
--- a/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j784s4-j742s2-main-common.dtsi
@@ -2350,42 +2350,6 @@ watchdog3: watchdog@2230000 {
assigned-clock-parents = <&k3_clks 351 4>;
};

- watchdog4: watchdog@2240000 {
- compatible = "ti,j7-rti-wdt";
- reg = <0x00 0x2240000 0x00 0x100>;
- clocks = <&k3_clks 352 0>;
- power-domains = <&k3_pds 352 TI_SCI_PD_EXCLUSIVE>;
- assigned-clocks = <&k3_clks 352 0>;
- assigned-clock-parents = <&k3_clks 352 4>;
- };
-
- watchdog5: watchdog@2250000 {
- compatible = "ti,j7-rti-wdt";
- reg = <0x00 0x2250000 0x00 0x100>;
- clocks = <&k3_clks 353 0>;
- power-domains = <&k3_pds 353 TI_SCI_PD_EXCLUSIVE>;
- assigned-clocks = <&k3_clks 353 0>;
- assigned-clock-parents = <&k3_clks 353 4>;
- };
-
- watchdog6: watchdog@2260000 {
- compatible = "ti,j7-rti-wdt";
- reg = <0x00 0x2260000 0x00 0x100>;
- clocks = <&k3_clks 354 0>;
- power-domains = <&k3_pds 354 TI_SCI_PD_EXCLUSIVE>;
- assigned-clocks = <&k3_clks 354 0>;
- assigned-clock-parents = <&k3_clks 354 4>;
- };
-
- watchdog7: watchdog@2270000 {
- compatible = "ti,j7-rti-wdt";
- reg = <0x00 0x2270000 0x00 0x100>;
- clocks = <&k3_clks 355 0>;
- power-domains = <&k3_pds 355 TI_SCI_PD_EXCLUSIVE>;
- assigned-clocks = <&k3_clks 355 0>;
- assigned-clock-parents = <&k3_clks 355 4>;
- };
-
/*
* The following RTI instances are coupled with MCU R5Fs, c7x and
* GPU so keeping them reserved as these will be used by their
diff --git a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
index 0160fe0da9838..78fcd0c40abcf 100644
--- a/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
+++ b/arch/arm64/boot/dts/ti/k3-j784s4-main.dtsi
@@ -6,17 +6,40 @@
*/

&cbass_main {
- c71_3: dsp@67800000 {
- compatible = "ti,j721s2-c71-dsp";
- reg = <0x00 0x67800000 0x00 0x00080000>,
- <0x00 0x67e00000 0x00 0x0000c000>;
- reg-names = "l2sram", "l1dram";
- resets = <&k3_reset 40 1>;
- firmware-name = "j784s4-c71_3-fw";
- ti,sci = <&sms>;
- ti,sci-dev-id = <40>;
- ti,sci-proc-ids = <0x33 0xff>;
- status = "disabled";
+ watchdog4: watchdog@2240000 {
+ compatible = "ti,j7-rti-wdt";
+ reg = <0x00 0x2240000 0x00 0x100>;
+ clocks = <&k3_clks 352 0>;
+ power-domains = <&k3_pds 352 TI_SCI_PD_EXCLUSIVE>;
+ assigned-clocks = <&k3_clks 352 0>;
+ assigned-clock-parents = <&k3_clks 352 4>;
+ };
+
+ watchdog5: watchdog@2250000 {
+ compatible = "ti,j7-rti-wdt";
+ reg = <0x00 0x2250000 0x00 0x100>;
+ clocks = <&k3_clks 353 0>;
+ power-domains = <&k3_pds 353 TI_SCI_PD_EXCLUSIVE>;
+ assigned-clocks = <&k3_clks 353 0>;
+ assigned-clock-parents = <&k3_clks 353 4>;
+ };
+
+ watchdog6: watchdog@2260000 {
+ compatible = "ti,j7-rti-wdt";
+ reg = <0x00 0x2260000 0x00 0x100>;
+ clocks = <&k3_clks 354 0>;
+ power-domains = <&k3_pds 354 TI_SCI_PD_EXCLUSIVE>;
+ assigned-clocks = <&k3_clks 354 0>;
+ assigned-clock-parents = <&k3_clks 354 4>;
+ };
+
+ watchdog7: watchdog@2270000 {
+ compatible = "ti,j7-rti-wdt";
+ reg = <0x00 0x2270000 0x00 0x100>;
+ clocks = <&k3_clks 355 0>;
+ power-domains = <&k3_pds 355 TI_SCI_PD_EXCLUSIVE>;
+ assigned-clocks = <&k3_clks 355 0>;
+ assigned-clock-parents = <&k3_clks 355 4>;
};

pcie2_rc: pcie@2920000 {
@@ -113,6 +136,19 @@ serdes2: serdes@5020000 {
status = "disabled";
};
};
+
+ c71_3: dsp@67800000 {
+ compatible = "ti,j721s2-c71-dsp";
+ reg = <0x00 0x67800000 0x00 0x00080000>,
+ <0x00 0x67e00000 0x00 0x0000c000>;
+ reg-names = "l2sram", "l1dram";
+ resets = <&k3_reset 40 1>;
+ firmware-name = "j784s4-c71_3-fw";
+ ti,sci = <&sms>;
+ ti,sci-dev-id = <40>;
+ ti,sci-proc-ids = <0x33 0xff>;
+ status = "disabled";
+ };
};

&scm_conf {
diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
index e2ad5fb2cb0a4..0ff45540ca874 100644
--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
+++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi
@@ -187,11 +187,6 @@ psci {
};

firmware {
- optee: optee {
- compatible = "linaro,optee-tz";
- method = "smc";
- };
-
zynqmp_firmware: zynqmp-firmware {
compatible = "xlnx,zynqmp-firmware";
#power-domain-cells = <1>;
diff --git a/arch/arm64/include/asm/ftrace.h b/arch/arm64/include/asm/ftrace.h
index dc9cf0bd2a4cb..10e56522122aa 100644
--- a/arch/arm64/include/asm/ftrace.h
+++ b/arch/arm64/include/asm/ftrace.h
@@ -54,8 +54,11 @@ extern void return_to_handler(void);
unsigned long ftrace_call_adjust(unsigned long addr);

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
+#define HAVE_ARCH_FTRACE_REGS
struct dyn_ftrace;
struct ftrace_ops;
+struct ftrace_regs;
+#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))

#define arch_ftrace_get_regs(regs) NULL

@@ -63,7 +66,7 @@ struct ftrace_ops;
* Note: sizeof(struct ftrace_regs) must be a multiple of 16 to ensure correct
* stack alignment
*/
-struct ftrace_regs {
+struct __arch_ftrace_regs {
/* x0 - x8 */
unsigned long regs[9];

@@ -83,49 +86,75 @@ struct ftrace_regs {
static __always_inline unsigned long
ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs)
{
- return fregs->pc;
+ return arch_ftrace_regs(fregs)->pc;
}

static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long pc)
{
- fregs->pc = pc;
+ arch_ftrace_regs(fregs)->pc = pc;
}

static __always_inline unsigned long
ftrace_regs_get_stack_pointer(const struct ftrace_regs *fregs)
{
- return fregs->sp;
+ return arch_ftrace_regs(fregs)->sp;
}

static __always_inline unsigned long
ftrace_regs_get_argument(struct ftrace_regs *fregs, unsigned int n)
{
if (n < 8)
- return fregs->regs[n];
+ return arch_ftrace_regs(fregs)->regs[n];
return 0;
}

static __always_inline unsigned long
ftrace_regs_get_return_value(const struct ftrace_regs *fregs)
{
- return fregs->regs[0];
+ return arch_ftrace_regs(fregs)->regs[0];
}

static __always_inline void
ftrace_regs_set_return_value(struct ftrace_regs *fregs,
unsigned long ret)
{
- fregs->regs[0] = ret;
+ arch_ftrace_regs(fregs)->regs[0] = ret;
}

static __always_inline void
ftrace_override_function_with_return(struct ftrace_regs *fregs)
{
- fregs->pc = fregs->lr;
+ arch_ftrace_regs(fregs)->pc = arch_ftrace_regs(fregs)->lr;
}

+static __always_inline unsigned long
+ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs)
+{
+ return arch_ftrace_regs(fregs)->fp;
+}
+
+static __always_inline struct pt_regs *
+ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs)
+{
+ struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs);
+
+ memcpy(regs->regs, afregs->regs, sizeof(afregs->regs));
+ regs->sp = afregs->sp;
+ regs->pc = afregs->pc;
+ regs->regs[29] = afregs->fp;
+ regs->regs[30] = afregs->lr;
+ return regs;
+}
+
+#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
+ (_regs)->pc = arch_ftrace_regs(fregs)->pc; \
+ (_regs)->regs[29] = arch_ftrace_regs(fregs)->fp; \
+ (_regs)->sp = arch_ftrace_regs(fregs)->sp; \
+ (_regs)->pstate = PSR_MODE_EL1h; \
+ } while (0)
+
int ftrace_regs_query_register_offset(const char *name);

int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
@@ -143,7 +172,7 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs,
* The ftrace trampoline will return to this address instead of the
* instrumented function.
*/
- fregs->direct_tramp = addr;
+ arch_ftrace_regs(fregs)->direct_tramp = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */

@@ -183,23 +212,6 @@ static inline bool arch_syscall_match_sym_name(const char *sym,

#ifndef __ASSEMBLY__
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-struct fgraph_ret_regs {
- /* x0 - x7 */
- unsigned long regs[8];
-
- unsigned long fp;
- unsigned long __unused;
-};
-
-static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->regs[0];
-}
-
-static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->fp;
-}

void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
unsigned long frame_pointer);
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index eb57ddb5ecc53..2de5a70a47ce3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -93,8 +93,6 @@ static inline pteval_t __phys_to_pte_val(phys_addr_t phys)
__pte(__phys_to_pte_val((phys_addr_t)(pfn) << PAGE_SHIFT) | pgprot_val(prot))

#define pte_none(pte) (!pte_val(pte))
-#define __pte_clear(mm, addr, ptep) \
- __set_pte(ptep, __pte(0))
#define pte_page(pte) (pfn_to_page(pte_pfn(pte)))

/*
@@ -1224,6 +1222,13 @@ static inline bool pud_user_accessible_page(pud_t pud)
/*
* Atomic pte/pmd modifications.
*/
+
+static inline void __pte_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ __set_pte(ptep, __pte(0));
+}
+
static inline int __ptep_test_and_clear_young(struct vm_area_struct *vma,
unsigned long address,
pte_t *ptep)
diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h
index 56f7b1d4d54b9..bd5fc880b9090 100644
--- a/arch/arm64/include/asm/rwonce.h
+++ b/arch/arm64/include/asm/rwonce.h
@@ -62,7 +62,7 @@
default: \
atomic = 0; \
} \
- atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(__x))__x);\
+ atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(*__x) *)__x);\
})

#endif /* !BUILD_VDSO */
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 020e01181a0f1..eccbe4d725f3b 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -85,19 +85,19 @@ int main(void)
DEFINE(PT_REGS_SIZE, sizeof(struct pt_regs));
BLANK();
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
- DEFINE(FREGS_X0, offsetof(struct ftrace_regs, regs[0]));
- DEFINE(FREGS_X2, offsetof(struct ftrace_regs, regs[2]));
- DEFINE(FREGS_X4, offsetof(struct ftrace_regs, regs[4]));
- DEFINE(FREGS_X6, offsetof(struct ftrace_regs, regs[6]));
- DEFINE(FREGS_X8, offsetof(struct ftrace_regs, regs[8]));
- DEFINE(FREGS_FP, offsetof(struct ftrace_regs, fp));
- DEFINE(FREGS_LR, offsetof(struct ftrace_regs, lr));
- DEFINE(FREGS_SP, offsetof(struct ftrace_regs, sp));
- DEFINE(FREGS_PC, offsetof(struct ftrace_regs, pc));
+ DEFINE(FREGS_X0, offsetof(struct __arch_ftrace_regs, regs[0]));
+ DEFINE(FREGS_X2, offsetof(struct __arch_ftrace_regs, regs[2]));
+ DEFINE(FREGS_X4, offsetof(struct __arch_ftrace_regs, regs[4]));
+ DEFINE(FREGS_X6, offsetof(struct __arch_ftrace_regs, regs[6]));
+ DEFINE(FREGS_X8, offsetof(struct __arch_ftrace_regs, regs[8]));
+ DEFINE(FREGS_FP, offsetof(struct __arch_ftrace_regs, fp));
+ DEFINE(FREGS_LR, offsetof(struct __arch_ftrace_regs, lr));
+ DEFINE(FREGS_SP, offsetof(struct __arch_ftrace_regs, sp));
+ DEFINE(FREGS_PC, offsetof(struct __arch_ftrace_regs, pc));
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
- DEFINE(FREGS_DIRECT_TRAMP, offsetof(struct ftrace_regs, direct_tramp));
+ DEFINE(FREGS_DIRECT_TRAMP, offsetof(struct __arch_ftrace_regs, direct_tramp));
#endif
- DEFINE(FREGS_SIZE, sizeof(struct ftrace_regs));
+ DEFINE(FREGS_SIZE, sizeof(struct __arch_ftrace_regs));
BLANK();
#endif
#ifdef CONFIG_COMPAT
@@ -203,18 +203,6 @@ int main(void)
DEFINE(FTRACE_OPS_FUNC, offsetof(struct ftrace_ops, func));
#endif
BLANK();
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- DEFINE(FGRET_REGS_X0, offsetof(struct fgraph_ret_regs, regs[0]));
- DEFINE(FGRET_REGS_X1, offsetof(struct fgraph_ret_regs, regs[1]));
- DEFINE(FGRET_REGS_X2, offsetof(struct fgraph_ret_regs, regs[2]));
- DEFINE(FGRET_REGS_X3, offsetof(struct fgraph_ret_regs, regs[3]));
- DEFINE(FGRET_REGS_X4, offsetof(struct fgraph_ret_regs, regs[4]));
- DEFINE(FGRET_REGS_X5, offsetof(struct fgraph_ret_regs, regs[5]));
- DEFINE(FGRET_REGS_X6, offsetof(struct fgraph_ret_regs, regs[6]));
- DEFINE(FGRET_REGS_X7, offsetof(struct fgraph_ret_regs, regs[7]));
- DEFINE(FGRET_REGS_FP, offsetof(struct fgraph_ret_regs, fp));
- DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
-#endif
#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
DEFINE(FTRACE_OPS_DIRECT_CALL, offsetof(struct ftrace_ops, direct_call));
#endif
diff --git a/arch/arm64/kernel/entry-ftrace.S b/arch/arm64/kernel/entry-ftrace.S
index f0c16640ef215..169ccf600066b 100644
--- a/arch/arm64/kernel/entry-ftrace.S
+++ b/arch/arm64/kernel/entry-ftrace.S
@@ -329,24 +329,28 @@ SYM_FUNC_END(ftrace_stub_graph)
* @fp is checked against the value passed by ftrace_graph_caller().
*/
SYM_CODE_START(return_to_handler)
- /* save return value regs */
- sub sp, sp, #FGRET_REGS_SIZE
- stp x0, x1, [sp, #FGRET_REGS_X0]
- stp x2, x3, [sp, #FGRET_REGS_X2]
- stp x4, x5, [sp, #FGRET_REGS_X4]
- stp x6, x7, [sp, #FGRET_REGS_X6]
- str x29, [sp, #FGRET_REGS_FP] // parent's fp
+ /* Make room for ftrace_regs */
+ sub sp, sp, #FREGS_SIZE
+
+ /* Save return value regs */
+ stp x0, x1, [sp, #FREGS_X0]
+ stp x2, x3, [sp, #FREGS_X2]
+ stp x4, x5, [sp, #FREGS_X4]
+ stp x6, x7, [sp, #FREGS_X6]
+
+ /* Save the callsite's FP */
+ str x29, [sp, #FREGS_FP]

mov x0, sp
- bl ftrace_return_to_handler // addr = ftrace_return_to_hander(regs);
+ bl ftrace_return_to_handler // addr = ftrace_return_to_hander(fregs);
mov x30, x0 // restore the original return address

- /* restore return value regs */
- ldp x0, x1, [sp, #FGRET_REGS_X0]
- ldp x2, x3, [sp, #FGRET_REGS_X2]
- ldp x4, x5, [sp, #FGRET_REGS_X4]
- ldp x6, x7, [sp, #FGRET_REGS_X6]
- add sp, sp, #FGRET_REGS_SIZE
+ /* Restore return value regs */
+ ldp x0, x1, [sp, #FREGS_X0]
+ ldp x2, x3, [sp, #FREGS_X2]
+ ldp x4, x5, [sp, #FREGS_X4]
+ ldp x6, x7, [sp, #FREGS_X6]
+ add sp, sp, #FREGS_SIZE

ret
SYM_CODE_END(return_to_handler)
diff --git a/arch/arm64/kernel/ftrace.c b/arch/arm64/kernel/ftrace.c
index b657f058bf4d5..06017bc1a555f 100644
--- a/arch/arm64/kernel/ftrace.c
+++ b/arch/arm64/kernel/ftrace.c
@@ -23,10 +23,10 @@ struct fregs_offset {
int offset;
};

-#define FREGS_OFFSET(n, field) \
-{ \
- .name = n, \
- .offset = offsetof(struct ftrace_regs, field), \
+#define FREGS_OFFSET(n, field) \
+{ \
+ .name = n, \
+ .offset = offsetof(struct __arch_ftrace_regs, field), \
}

static const struct fregs_offset fregs_offsets[] = {
@@ -488,7 +488,7 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- prepare_ftrace_return(ip, &fregs->lr, fregs->fp);
+ prepare_ftrace_return(ip, &arch_ftrace_regs(fregs)->lr, arch_ftrace_regs(fregs)->fp);
}
#else
/*
diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
index 31eaf15d2079a..ea8ca0714156b 100644
--- a/arch/arm64/kernel/proton-pack.c
+++ b/arch/arm64/kernel/proton-pack.c
@@ -896,6 +896,7 @@ static u8 spectre_bhb_loop_affected(void)
MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+ MIDR_ALL_VERSIONS(MIDR_HISI_TSV110),
{},
};
static const struct midr_range spectre_bhb_k24_list[] = {
diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
index 1a8f4284cb69a..ff1fe2d398692 100644
--- a/arch/arm64/kernel/ptrace.c
+++ b/arch/arm64/kernel/ptrace.c
@@ -1454,6 +1454,9 @@ static int poe_get(struct task_struct *target,
if (!system_supports_poe())
return -EINVAL;

+ if (target == current)
+ current->thread.por_el0 = read_sysreg_s(SYS_POR_EL0);
+
return membuf_write(&to, &target->thread.por_el0,
sizeof(target->thread.por_el0));
}
diff --git a/arch/arm64/lib/delay.c b/arch/arm64/lib/delay.c
index cb2062e7e2340..e278e060e78a9 100644
--- a/arch/arm64/lib/delay.c
+++ b/arch/arm64/lib/delay.c
@@ -23,9 +23,24 @@ static inline unsigned long xloops_to_cycles(unsigned long xloops)
return (xloops * loops_per_jiffy * HZ) >> 32;
}

+/*
+ * Force the use of CNTVCT_EL0 in order to have the same base as WFxT.
+ * This avoids some annoying issues when CNTVOFF_EL2 is not reset 0 on a
+ * KVM host running at EL1 until we do a vcpu_put() on the vcpu. When
+ * running at EL2, the effective offset is always 0.
+ *
+ * Note that userspace cannot change the offset behind our back either,
+ * as the vcpu mutex is held as long as KVM_RUN is in progress.
+ */
+static cycles_t notrace __delay_cycles(void)
+{
+ guard(preempt_notrace)();
+ return __arch_counter_get_cntvct_stable();
+}
+
void __delay(unsigned long cycles)
{
- cycles_t start = get_cycles();
+ cycles_t start = __delay_cycles();

if (alternative_has_cap_unlikely(ARM64_HAS_WFXT)) {
u64 end = start + cycles;
@@ -35,17 +50,17 @@ void __delay(unsigned long cycles)
* early, use a WFET loop to complete the delay.
*/
wfit(end);
- while ((get_cycles() - start) < cycles)
+ while ((__delay_cycles() - start) < cycles)
wfet(end);
} else if (arch_timer_evtstrm_available()) {
const cycles_t timer_evt_period =
USECS_TO_CYCLES(ARCH_TIMER_EVT_STREAM_PERIOD_US);

- while ((get_cycles() - start + timer_evt_period) < cycles)
+ while ((__delay_cycles() - start + timer_evt_period) < cycles)
wfe();
}

- while ((get_cycles() - start) < cycles)
+ while ((__delay_cycles() - start) < cycles)
cpu_relax();
}
EXPORT_SYMBOL(__delay);
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 5f35a8bd8996e..d402fcdf08610 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -136,7 +136,7 @@ config LOONGARCH
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_ERROR_INJECTION
- select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS
diff --git a/arch/loongarch/include/asm/ftrace.h b/arch/loongarch/include/asm/ftrace.h
index c0a682808e070..ceb3e3d9c0d3d 100644
--- a/arch/loongarch/include/asm/ftrace.h
+++ b/arch/loongarch/include/asm/ftrace.h
@@ -44,39 +44,22 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent);
#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
struct ftrace_ops;

-struct ftrace_regs {
- struct pt_regs regs;
-};
+#include <linux/ftrace_regs.h>

static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
- return &fregs->regs;
-}
-
-static __always_inline unsigned long
-ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
-{
- return instruction_pointer(&fregs->regs);
+ return &arch_ftrace_regs(fregs)->regs;
}

static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs, unsigned long ip)
{
- instruction_pointer_set(&fregs->regs, ip);
+ instruction_pointer_set(&arch_ftrace_regs(fregs)->regs, ip);
}

-#define ftrace_regs_get_argument(fregs, n) \
- regs_get_kernel_argument(&(fregs)->regs, n)
-#define ftrace_regs_get_stack_pointer(fregs) \
- kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
- regs_return_value(&(fregs)->regs)
-#define ftrace_regs_set_return_value(fregs, ret) \
- regs_set_return_value(&(fregs)->regs, ret)
-#define ftrace_override_function_with_return(fregs) \
- override_function_with_return(&(fregs)->regs)
-#define ftrace_regs_query_register_offset(name) \
- regs_query_register_offset(name)
+#undef ftrace_regs_get_frame_pointer
+#define ftrace_regs_get_frame_pointer(fregs) \
+ (arch_ftrace_regs(fregs)->regs.regs[22])

#define ftrace_graph_func ftrace_graph_func
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
@@ -90,7 +73,7 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
}

#define arch_ftrace_set_direct_caller(fregs, addr) \
- __arch_ftrace_set_direct_caller(&(fregs)->regs, addr)
+ __arch_ftrace_set_direct_caller(&arch_ftrace_regs(fregs)->regs, addr)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */

#endif
@@ -99,26 +82,4 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)

#endif /* CONFIG_FUNCTION_TRACER */

-#ifndef __ASSEMBLY__
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-struct fgraph_ret_regs {
- /* a0 - a1 */
- unsigned long regs[2];
-
- unsigned long fp;
- unsigned long __unused;
-};
-
-static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->regs[0];
-}
-
-static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->fp;
-}
-#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
-#endif
-
#endif /* _ASM_LOONGARCH_FTRACE_H */
diff --git a/arch/loongarch/include/asm/topology.h b/arch/loongarch/include/asm/topology.h
index 50273c9187d02..23ff2718a0259 100644
--- a/arch/loongarch/include/asm/topology.h
+++ b/arch/loongarch/include/asm/topology.h
@@ -12,7 +12,7 @@

extern cpumask_t cpus_on_node[];

-#define cpumask_of_node(node) (&cpus_on_node[node])
+#define cpumask_of_node(node) ((node) == NUMA_NO_NODE ? cpu_all_mask : &cpus_on_node[node])

struct pci_bus;
extern int pcibus_to_node(struct pci_bus *);
diff --git a/arch/loongarch/kernel/asm-offsets.c b/arch/loongarch/kernel/asm-offsets.c
index d20d71d4bcae6..73954aa226646 100644
--- a/arch/loongarch/kernel/asm-offsets.c
+++ b/arch/loongarch/kernel/asm-offsets.c
@@ -281,18 +281,6 @@ static void __used output_pbe_defines(void)
}
#endif

-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-static void __used output_fgraph_ret_regs_defines(void)
-{
- COMMENT("LoongArch fgraph_ret_regs offsets.");
- OFFSET(FGRET_REGS_A0, fgraph_ret_regs, regs[0]);
- OFFSET(FGRET_REGS_A1, fgraph_ret_regs, regs[1]);
- OFFSET(FGRET_REGS_FP, fgraph_ret_regs, fp);
- DEFINE(FGRET_REGS_SIZE, sizeof(struct fgraph_ret_regs));
- BLANK();
-}
-#endif
-
static void __used output_kvm_defines(void)
{
COMMENT("KVM/LoongArch Specific offsets.");
diff --git a/arch/loongarch/kernel/ftrace_dyn.c b/arch/loongarch/kernel/ftrace_dyn.c
index bff058317062e..18056229e22e4 100644
--- a/arch/loongarch/kernel/ftrace_dyn.c
+++ b/arch/loongarch/kernel/ftrace_dyn.c
@@ -241,7 +241,7 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent)
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- struct pt_regs *regs = &fregs->regs;
+ struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *parent = (unsigned long *)&regs->regs[1];

prepare_ftrace_return(ip, (unsigned long *)parent);
diff --git a/arch/loongarch/kernel/mcount.S b/arch/loongarch/kernel/mcount.S
index 3015896016a0b..b6850503e061b 100644
--- a/arch/loongarch/kernel/mcount.S
+++ b/arch/loongarch/kernel/mcount.S
@@ -79,10 +79,11 @@ SYM_FUNC_START(ftrace_graph_caller)
SYM_FUNC_END(ftrace_graph_caller)

SYM_FUNC_START(return_to_handler)
- PTR_ADDI sp, sp, -FGRET_REGS_SIZE
- PTR_S a0, sp, FGRET_REGS_A0
- PTR_S a1, sp, FGRET_REGS_A1
- PTR_S zero, sp, FGRET_REGS_FP
+ /* Save return value regs */
+ PTR_ADDI sp, sp, -PT_SIZE
+ PTR_S a0, sp, PT_R4
+ PTR_S a1, sp, PT_R5
+ PTR_S zero, sp, PT_R22

move a0, sp
bl ftrace_return_to_handler
@@ -90,9 +91,11 @@ SYM_FUNC_START(return_to_handler)
/* Restore the real parent address: a0 -> ra */
move ra, a0

- PTR_L a0, sp, FGRET_REGS_A0
- PTR_L a1, sp, FGRET_REGS_A1
- PTR_ADDI sp, sp, FGRET_REGS_SIZE
+ /* Restore return value regs */
+ PTR_L a0, sp, PT_R4
+ PTR_L a1, sp, PT_R5
+ PTR_ADDI sp, sp, PT_SIZE
+
jr ra
SYM_FUNC_END(return_to_handler)
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/loongarch/kernel/mcount_dyn.S b/arch/loongarch/kernel/mcount_dyn.S
index 4e05adb405043..5729c20e5b8b0 100644
--- a/arch/loongarch/kernel/mcount_dyn.S
+++ b/arch/loongarch/kernel/mcount_dyn.S
@@ -144,19 +144,19 @@ SYM_CODE_END(ftrace_graph_caller)
SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED
/* Save return value regs */
- PTR_ADDI sp, sp, -FGRET_REGS_SIZE
- PTR_S a0, sp, FGRET_REGS_A0
- PTR_S a1, sp, FGRET_REGS_A1
- PTR_S zero, sp, FGRET_REGS_FP
+ PTR_ADDI sp, sp, -PT_SIZE
+ PTR_S a0, sp, PT_R4
+ PTR_S a1, sp, PT_R5
+ PTR_S zero, sp, PT_R22

move a0, sp
bl ftrace_return_to_handler
move ra, a0

/* Restore return value regs */
- PTR_L a0, sp, FGRET_REGS_A0
- PTR_L a1, sp, FGRET_REGS_A1
- PTR_ADDI sp, sp, FGRET_REGS_SIZE
+ PTR_L a0, sp, PT_R4
+ PTR_L a1, sp, PT_R5
+ PTR_ADDI sp, sp, PT_SIZE

jr ra
SYM_CODE_END(return_to_handler)
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index 3cfe591b66a4c..2c2586b6e08ce 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -421,6 +421,7 @@ static void __init arch_mem_init(char **cmdline_p)
PFN_UP(__pa_symbol(&__nosave_end)));

memblock_dump_all();
+ memblock_set_bottom_up(false);

early_memtest(PFN_PHYS(ARCH_PFN_OFFSET), PFN_PHYS(max_low_pfn));
}
diff --git a/arch/loongarch/kernel/unwind_orc.c b/arch/loongarch/kernel/unwind_orc.c
index b4b4ac8dbf417..471652c0c8653 100644
--- a/arch/loongarch/kernel/unwind_orc.c
+++ b/arch/loongarch/kernel/unwind_orc.c
@@ -507,7 +507,7 @@ bool unwind_next_frame(struct unwind_state *state)

state->pc = bt_address(pc);
if (!state->pc) {
- pr_err("cannot find unwind pc at %p\n", (void *)pc);
+ pr_err("cannot find unwind pc at %px\n", (void *)pc);
goto err;
}

diff --git a/arch/loongarch/kernel/unwind_prologue.c b/arch/loongarch/kernel/unwind_prologue.c
index 929ae240280a5..c9ee6892d81c7 100644
--- a/arch/loongarch/kernel/unwind_prologue.c
+++ b/arch/loongarch/kernel/unwind_prologue.c
@@ -64,7 +64,7 @@ static inline bool scan_handlers(unsigned long entry_offset)

static inline bool fix_exception(unsigned long pc)
{
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) && !defined(CONFIG_PREEMPT_RT)
int cpu;

for_each_possible_cpu(cpu) {
diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
index 3b427b319db21..f46c15d6e7eae 100644
--- a/arch/loongarch/mm/tlb.c
+++ b/arch/loongarch/mm/tlb.c
@@ -202,7 +202,7 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t *ptep
local_irq_restore(flags);
}

-static void setup_ptwalker(void)
+static void __no_sanitize_address setup_ptwalker(void)
{
unsigned long pwctl0, pwctl1;
unsigned long pgd_i = 0, pgd_w = 0;
diff --git a/arch/m68k/lib/memmove.c b/arch/m68k/lib/memmove.c
index 6519f7f349f66..e33f00b02e4c0 100644
--- a/arch/m68k/lib/memmove.c
+++ b/arch/m68k/lib/memmove.c
@@ -24,6 +24,15 @@ void *memmove(void *dest, const void *src, size_t n)
src = csrc;
n--;
}
+#if defined(CONFIG_M68000)
+ if ((long)src & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ for (; n; n--)
+ *cdest++ = *csrc++;
+ return xdest;
+ }
+#endif
if (n > 2 && (long)dest & 2) {
short *sdest = dest;
const short *ssrc = src;
@@ -66,6 +75,15 @@ void *memmove(void *dest, const void *src, size_t n)
src = csrc;
n--;
}
+#if defined(CONFIG_M68000)
+ if ((long)src & 1) {
+ char *cdest = dest;
+ const char *csrc = src;
+ for (; n; n--)
+ *--cdest = *--csrc;
+ return xdest;
+ }
+#endif
if (n > 2 && (long)dest & 2) {
short *sdest = dest;
const short *ssrc = src;
diff --git a/arch/mips/include/asm/mach-loongson64/topology.h b/arch/mips/include/asm/mach-loongson64/topology.h
index 3414a1fd17835..89bb4deab98a6 100644
--- a/arch/mips/include/asm/mach-loongson64/topology.h
+++ b/arch/mips/include/asm/mach-loongson64/topology.h
@@ -7,7 +7,7 @@
#define cpu_to_node(cpu) (cpu_logical_map(cpu) >> 2)

extern cpumask_t __node_cpumask[];
-#define cpumask_of_node(node) (&__node_cpumask[node])
+#define cpumask_of_node(node) ((node) == NUMA_NO_NODE ? cpu_all_mask : &__node_cpumask[node])

struct pci_bus;
extern int pcibus_to_node(struct pci_bus *);
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
index cda7983e7c18d..ba3b82f0341a9 100644
--- a/arch/mips/kernel/relocate.c
+++ b/arch/mips/kernel/relocate.c
@@ -420,7 +420,20 @@ void *__init relocate_kernel(void)
goto out;

/* The current thread is now within the relocated image */
+#ifndef CONFIG_CC_IS_CLANG
__current_thread_info = RELOCATED(&init_thread_union);
+#else
+ /*
+ * LLVM may wrongly restore $gp ($28) in the epilogue even if it's
+ * intentionally modified. Work around this by using inline
+ * assembly to assign $gp. $gp can't be listed as an output or
+ * clobber, or LLVM will still restore its original value.
+ * See also LLVM upstream issue
+ * https://github.com/llvm/llvm-project/issues/176546
+ */
+ asm volatile("move $28, %0" : :
+ "r" (RELOCATED(&init_thread_union)));
+#endif

/* Return the new kernel's entry point */
kernel_entry = RELOCATED(start_kernel);
diff --git a/arch/mips/rb532/devices.c b/arch/mips/rb532/devices.c
index b7f6f782d9a13..ffa4d38ca95df 100644
--- a/arch/mips/rb532/devices.c
+++ b/arch/mips/rb532/devices.c
@@ -212,11 +212,12 @@ static struct platform_device rb532_wdt = {
static struct plat_serial8250_port rb532_uart_res[] = {
{
.type = PORT_16550A,
- .membase = (char *)KSEG1ADDR(REGBASE + UART0BASE),
+ .mapbase = REGBASE + UART0BASE,
+ .mapsize = 0x1000,
.irq = UART0_IRQ,
.regshift = 2,
.iotype = UPIO_MEM,
- .flags = UPF_BOOT_AUTOCONF,
+ .flags = UPF_BOOT_AUTOCONF | UPF_IOREMAP,
},
{
.flags = 0,
diff --git a/arch/openrisc/include/asm/barrier.h b/arch/openrisc/include/asm/barrier.h
index 7538294721bed..8e592c9909023 100644
--- a/arch/openrisc/include/asm/barrier.h
+++ b/arch/openrisc/include/asm/barrier.h
@@ -4,6 +4,8 @@

#define mb() asm volatile ("l.msync" ::: "memory")

+#define nop() asm volatile ("l.nop")
+
#include <asm-generic/barrier.h>

#endif /* __ASM_BARRIER_H */
diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
index 1e793f770f719..5dd7f862db2bd 100644
--- a/arch/parisc/kernel/drivers.c
+++ b/arch/parisc/kernel/drivers.c
@@ -435,7 +435,7 @@ static struct parisc_device * __init create_tree_node(char id,
dev->dev.dma_mask = &dev->dma_mask;
dev->dev.coherent_dma_mask = dev->dma_mask;
if (device_register(&dev->dev)) {
- kfree(dev);
+ put_device(&dev->dev);
return NULL;
}

diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
index ed93bd8c15453..d7b0f233f2cc9 100644
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -85,6 +85,9 @@ void machine_restart(char *cmd)
#endif
/* set up a new led state on systems shipped with a LED State panel */
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_SHUTDOWN);
+
+ /* prevent interrupts during reboot */
+ set_eiem(0);

/* "Normal" system reset */
pdc_do_reset();
diff --git a/arch/powerpc/include/asm/eeh.h b/arch/powerpc/include/asm/eeh.h
index 5e34611de9ef4..b7ebb4ac2c710 100644
--- a/arch/powerpc/include/asm/eeh.h
+++ b/arch/powerpc/include/asm/eeh.h
@@ -289,6 +289,8 @@ void eeh_pe_dev_traverse(struct eeh_pe *root,
void eeh_pe_restore_bars(struct eeh_pe *pe);
const char *eeh_pe_loc_get(struct eeh_pe *pe);
struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe);
+const char *eeh_pe_loc_get_bus(struct pci_bus *bus);
+struct pci_bus *eeh_pe_bus_get_nolock(struct eeh_pe *pe);

void eeh_show_enabled(void);
int __init eeh_init(struct eeh_ops *ops);
diff --git a/arch/powerpc/include/asm/ftrace.h b/arch/powerpc/include/asm/ftrace.h
index 559560286e6d0..407ce6eccc04f 100644
--- a/arch/powerpc/include/asm/ftrace.h
+++ b/arch/powerpc/include/asm/ftrace.h
@@ -32,42 +32,28 @@ struct dyn_arch_ftrace {
int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
#define ftrace_init_nop ftrace_init_nop

-struct ftrace_regs {
- struct pt_regs regs;
-};
+#include <linux/ftrace_regs.h>

static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
/* We clear regs.msr in ftrace_call */
- return fregs->regs.msr ? &fregs->regs : NULL;
+ return arch_ftrace_regs(fregs)->regs.msr ? &arch_ftrace_regs(fregs)->regs : NULL;
}

+#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
+ (_regs)->result = 0; \
+ (_regs)->nip = arch_ftrace_regs(fregs)->regs.nip; \
+ (_regs)->gpr[1] = arch_ftrace_regs(fregs)->regs.gpr[1]; \
+ asm volatile("mfmsr %0" : "=r" ((_regs)->msr)); \
+ } while (0)
+
static __always_inline void
ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long ip)
{
- regs_set_return_ip(&fregs->regs, ip);
-}
-
-static __always_inline unsigned long
-ftrace_regs_get_instruction_pointer(struct ftrace_regs *fregs)
-{
- return instruction_pointer(&fregs->regs);
+ regs_set_return_ip(&arch_ftrace_regs(fregs)->regs, ip);
}

-#define ftrace_regs_get_argument(fregs, n) \
- regs_get_kernel_argument(&(fregs)->regs, n)
-#define ftrace_regs_get_stack_pointer(fregs) \
- kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
- regs_return_value(&(fregs)->regs)
-#define ftrace_regs_set_return_value(fregs, ret) \
- regs_set_return_value(&(fregs)->regs, ret)
-#define ftrace_override_function_with_return(fregs) \
- override_function_with_return(&(fregs)->regs)
-#define ftrace_regs_query_register_offset(name) \
- regs_query_register_offset(name)
-
struct ftrace_ops;

#define ftrace_graph_func ftrace_graph_func
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 2bb03d941e3e8..6737416dde9f0 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -134,7 +134,6 @@ static __always_inline void kuap_assert_locked(void)

static __always_inline void allow_read_from_user(const void __user *from, unsigned long size)
{
- barrier_nospec();
allow_user_access(NULL, from, size, KUAP_READ);
}

@@ -146,7 +145,6 @@ static __always_inline void allow_write_to_user(void __user *to, unsigned long s
static __always_inline void allow_read_write_user(void __user *to, const void __user *from,
unsigned long size)
{
- barrier_nospec();
allow_user_access(to, from, size, KUAP_READ_WRITE);
}

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 4f5a46a77fa2b..3987a5c33558b 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -301,6 +301,7 @@ do { \
__typeof__(sizeof(*(ptr))) __gu_size = sizeof(*(ptr)); \
\
might_fault(); \
+ barrier_nospec(); \
allow_read_from_user(__gu_addr, __gu_size); \
__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
prevent_read_from_user(__gu_addr, __gu_size); \
@@ -329,6 +330,7 @@ raw_copy_in_user(void __user *to, const void __user *from, unsigned long n)
{
unsigned long ret;

+ barrier_nospec();
allow_read_write_user(to, from, n);
ret = __copy_tofrom_user(to, from, n);
prevent_read_write_user(to, from, n);
@@ -415,6 +417,7 @@ static __must_check __always_inline bool user_access_begin(const void __user *pt

might_fault();

+ barrier_nospec();
allow_read_write_user((void __user *)ptr, ptr, len);
return true;
}
@@ -431,6 +434,7 @@ user_read_access_begin(const void __user *ptr, size_t len)

might_fault();

+ barrier_nospec();
allow_read_from_user(ptr, len);
return true;
}
diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
index c73e4225e84a5..51a6d881c2292 100644
--- a/arch/powerpc/kernel/eeh_driver.c
+++ b/arch/powerpc/kernel/eeh_driver.c
@@ -846,7 +846,7 @@ void eeh_handle_normal_event(struct eeh_pe *pe)

pci_lock_rescan_remove();

- bus = eeh_pe_bus_get(pe);
+ bus = eeh_pe_bus_get_nolock(pe);
if (!bus) {
pr_err("%s: Cannot find PCI bus for PHB#%x-PE#%x\n",
__func__, pe->phb->global_number, pe->addr);
@@ -886,14 +886,15 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
/* Log the event */
if (pe->type & EEH_PE_PHB) {
pr_err("EEH: Recovering PHB#%x, location: %s\n",
- pe->phb->global_number, eeh_pe_loc_get(pe));
+ pe->phb->global_number, eeh_pe_loc_get_bus(bus));
} else {
struct eeh_pe *phb_pe = eeh_phb_pe_get(pe->phb);

pr_err("EEH: Recovering PHB#%x-PE#%x\n",
pe->phb->global_number, pe->addr);
pr_err("EEH: PE location: %s, PHB location: %s\n",
- eeh_pe_loc_get(pe), eeh_pe_loc_get(phb_pe));
+ eeh_pe_loc_get_bus(bus),
+ eeh_pe_loc_get_bus(eeh_pe_bus_get_nolock(phb_pe)));
}

#ifdef CONFIG_STACKTRACE
@@ -1098,7 +1099,7 @@ void eeh_handle_normal_event(struct eeh_pe *pe)
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS, true);
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);

- bus = eeh_pe_bus_get(pe);
+ bus = eeh_pe_bus_get_nolock(pe);
if (bus)
pci_hp_remove_devices(bus);
else
@@ -1222,7 +1223,7 @@ void eeh_handle_special_event(void)
(phb_pe->state & EEH_PE_RECOVERING))
continue;

- bus = eeh_pe_bus_get(phb_pe);
+ bus = eeh_pe_bus_get_nolock(phb_pe);
if (!bus) {
pr_err("%s: Cannot find PCI bus for "
"PHB#%x-PE#%x\n",
diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
index e740101fadf3b..040e8f69a4aa8 100644
--- a/arch/powerpc/kernel/eeh_pe.c
+++ b/arch/powerpc/kernel/eeh_pe.c
@@ -812,6 +812,24 @@ void eeh_pe_restore_bars(struct eeh_pe *pe)
const char *eeh_pe_loc_get(struct eeh_pe *pe)
{
struct pci_bus *bus = eeh_pe_bus_get(pe);
+ return eeh_pe_loc_get_bus(bus);
+}
+
+/**
+ * eeh_pe_loc_get_bus - Retrieve location code binding to the given PCI bus
+ * @bus: PCI bus
+ *
+ * Retrieve the location code associated with the given PCI bus. If the bus
+ * is a root bus, the location code is fetched from the PHB device tree node
+ * or root port. Otherwise, the location code is obtained from the device
+ * tree node of the upstream bridge of the bus. The function walks up the
+ * bus hierarchy if necessary, checking each node for the appropriate
+ * location code property ("ibm,io-base-loc-code" for root buses,
+ * "ibm,slot-location-code" for others). If no location code is found,
+ * returns "N/A".
+ */
+const char *eeh_pe_loc_get_bus(struct pci_bus *bus)
+{
struct device_node *dn;
const char *loc = NULL;

@@ -838,8 +856,9 @@ const char *eeh_pe_loc_get(struct eeh_pe *pe)
}

/**
- * eeh_pe_bus_get - Retrieve PCI bus according to the given PE
+ * _eeh_pe_bus_get - Retrieve PCI bus according to the given PE
* @pe: EEH PE
+ * @do_lock: Whether to take pci_lock_rescan_remove; false if the caller holds it
*
* Retrieve the PCI bus according to the given PE. Basically,
* there're 3 types of PEs: PHB/Bus/Device. For PHB PE, the
@@ -847,7 +866,7 @@ const char *eeh_pe_loc_get(struct eeh_pe *pe)
* returned for BUS PE. However, we don't have associated PCI
* bus for DEVICE PE.
*/
-struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
+static struct pci_bus *_eeh_pe_bus_get(struct eeh_pe *pe, bool do_lock)
{
struct eeh_dev *edev;
struct pci_dev *pdev;
@@ -862,11 +881,58 @@ struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)

/* Retrieve the parent PCI bus of first (top) PCI device */
edev = list_first_entry_or_null(&pe->edevs, struct eeh_dev, entry);
- pci_lock_rescan_remove();
+ if (do_lock)
+ pci_lock_rescan_remove();
pdev = eeh_dev_to_pci_dev(edev);
if (pdev)
bus = pdev->bus;
- pci_unlock_rescan_remove();
+ if (do_lock)
+ pci_unlock_rescan_remove();

return bus;
}
+
+/**
+ * eeh_pe_bus_get - Retrieve PCI bus associated with the given EEH PE, locking
+ * if needed
+ * @pe: Pointer to the EEH PE
+ *
+ * This function is a wrapper around _eeh_pe_bus_get() that retrieves the PCI
+ * bus associated with the given EEH PE structure. It acquires the PCI
+ * rescan/remove lock to ensure safe access to shared data during the
+ * retrieval. It should be used when the caller does not already hold
+ * pci_lock_rescan_remove, typically during operations that modify or
+ * inspect PCIe device state in a safe manner.
+ *
+ * RETURNS:
+ * A pointer to the PCI bus associated with the EEH PE, or NULL if none found.
+ */
+
+struct pci_bus *eeh_pe_bus_get(struct eeh_pe *pe)
+{
+ return _eeh_pe_bus_get(pe, true);
+}
+
+/**
+ * eeh_pe_bus_get_nolock - Retrieve PCI bus associated with the given EEH PE
+ * without locking
+ * @pe: Pointer to the EEH PE
+ *
+ * This function is a variant of _eeh_pe_bus_get() that retrieves the PCI bus
+ * associated with the specified EEH PE without acquiring the
+ * pci_lock_rescan_remove lock. It should only be used when the caller can
+ * guarantee safe access to PE structures without the need for that lock,
+ * typically in contexts where the lock is already held or locking is otherwise
+ * managed.
+ *
+ * RETURNS:
+ * A pointer to the PCI bus associated with the EEH PE, or NULL if none is found.
+ *
+ * NOTE:
+ * Use this function carefully to avoid race conditions and data corruption.
+ */
+
+struct pci_bus *eeh_pe_bus_get_nolock(struct eeh_pe *pe)
+{
+ return _eeh_pe_bus_get(pe, false);
+}
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 4ab9b8cee77a1..09b2ccf387118 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -821,6 +821,8 @@ static int parse_thread_groups(struct device_node *dn,

count = of_property_count_u32_elems(dn, "ibm,thread-groups");
thread_group_array = kcalloc(count, sizeof(u32), GFP_KERNEL);
+ if (!thread_group_array)
+ return -ENOMEM;
ret = of_property_read_u32_array(dn, "ibm,thread-groups",
thread_group_array, count);
if (ret)
diff --git a/arch/powerpc/kernel/trace/ftrace.c b/arch/powerpc/kernel/trace/ftrace.c
index d8d6b4fd9a14c..df41f4a7c738b 100644
--- a/arch/powerpc/kernel/trace/ftrace.c
+++ b/arch/powerpc/kernel/trace/ftrace.c
@@ -421,7 +421,7 @@ int __init ftrace_dyn_arch_init(void)
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- unsigned long sp = fregs->regs.gpr[1];
+ unsigned long sp = arch_ftrace_regs(fregs)->regs.gpr[1];
int bit;

if (unlikely(ftrace_graph_is_dead()))
@@ -439,6 +439,6 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,

ftrace_test_recursion_unlock(bit);
out:
- fregs->regs.link = parent_ip;
+ arch_ftrace_regs(fregs)->regs.link = parent_ip;
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
diff --git a/arch/powerpc/kernel/trace/ftrace_64_pg.c b/arch/powerpc/kernel/trace/ftrace_64_pg.c
index 12fab1803bcf4..d3c5552e4984d 100644
--- a/arch/powerpc/kernel/trace/ftrace_64_pg.c
+++ b/arch/powerpc/kernel/trace/ftrace_64_pg.c
@@ -829,7 +829,7 @@ __prepare_ftrace_return(unsigned long parent, unsigned long ip, unsigned long sp
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- fregs->regs.link = __prepare_ftrace_return(parent_ip, ip, fregs->regs.gpr[1]);
+ arch_ftrace_regs(fregs)->regs.link = __prepare_ftrace_return(parent_ip, ip, arch_ftrace_regs(fregs)->regs.gpr[1]);
}
#else
unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip,
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d160c3b830266..9e8667a523d55 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -144,7 +144,7 @@ config RISCV
select HAVE_DYNAMIC_FTRACE_WITH_ARGS if HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
select HAVE_FUNCTION_GRAPH_TRACER
- select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_TRACER if !XIP_KERNEL && !PREEMPTION
select HAVE_EBPF_JIT if MMU
select HAVE_GUP_FAST if MMU
diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
index f253c8dae878e..6c3ae13d1514c 100644
--- a/arch/riscv/include/asm/ftrace.h
+++ b/arch/riscv/include/asm/ftrace.h
@@ -125,8 +125,12 @@ int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
#define arch_ftrace_get_regs(regs) NULL
+#define HAVE_ARCH_FTRACE_REGS
struct ftrace_ops;
-struct ftrace_regs {
+struct ftrace_regs;
+#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))
+
+struct __arch_ftrace_regs {
unsigned long epc;
unsigned long ra;
unsigned long sp;
@@ -150,42 +154,61 @@ struct ftrace_regs {
static __always_inline unsigned long ftrace_regs_get_instruction_pointer(const struct ftrace_regs
*fregs)
{
- return fregs->epc;
+ return arch_ftrace_regs(fregs)->epc;
}

static __always_inline void ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
unsigned long pc)
{
- fregs->epc = pc;
+ arch_ftrace_regs(fregs)->epc = pc;
}

static __always_inline unsigned long ftrace_regs_get_stack_pointer(const struct ftrace_regs *fregs)
{
- return fregs->sp;
+ return arch_ftrace_regs(fregs)->sp;
+}
+
+static __always_inline unsigned long ftrace_regs_get_frame_pointer(const struct ftrace_regs *fregs)
+{
+ return arch_ftrace_regs(fregs)->s0;
}

static __always_inline unsigned long ftrace_regs_get_argument(struct ftrace_regs *fregs,
unsigned int n)
{
if (n < 8)
- return fregs->args[n];
+ return arch_ftrace_regs(fregs)->args[n];
return 0;
}

static __always_inline unsigned long ftrace_regs_get_return_value(const struct ftrace_regs *fregs)
{
- return fregs->a0;
+ return arch_ftrace_regs(fregs)->a0;
}

static __always_inline void ftrace_regs_set_return_value(struct ftrace_regs *fregs,
unsigned long ret)
{
- fregs->a0 = ret;
+ arch_ftrace_regs(fregs)->a0 = ret;
}

static __always_inline void ftrace_override_function_with_return(struct ftrace_regs *fregs)
{
- fregs->epc = fregs->ra;
+ arch_ftrace_regs(fregs)->epc = arch_ftrace_regs(fregs)->ra;
+}
+
+static __always_inline struct pt_regs *
+ftrace_partial_regs(const struct ftrace_regs *fregs, struct pt_regs *regs)
+{
+ struct __arch_ftrace_regs *afregs = arch_ftrace_regs(fregs);
+
+ memcpy(&regs->a0, afregs->args, sizeof(afregs->args));
+ regs->epc = afregs->epc;
+ regs->ra = afregs->ra;
+ regs->sp = afregs->sp;
+ regs->s0 = afregs->s0;
+ regs->t1 = afregs->t1;
+ return regs;
}

int ftrace_regs_query_register_offset(const char *name);
@@ -196,7 +219,7 @@ void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,

static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
{
- fregs->t1 = addr;
+ arch_ftrace_regs(fregs)->t1 = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */

@@ -204,25 +227,4 @@ static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsi

#endif /* CONFIG_DYNAMIC_FTRACE */

-#ifndef __ASSEMBLY__
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-struct fgraph_ret_regs {
- unsigned long a1;
- unsigned long a0;
- unsigned long s0;
- unsigned long ra;
-};
-
-static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->a0;
-}
-
-static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->s0;
-}
-#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
-#endif
-
#endif /* _ASM_RISCV_FTRACE_H */
diff --git a/arch/riscv/kernel/asm-offsets.c b/arch/riscv/kernel/asm-offsets.c
index 05c6152a65310..dfed9986b45ed 100644
--- a/arch/riscv/kernel/asm-offsets.c
+++ b/arch/riscv/kernel/asm-offsets.c
@@ -497,19 +497,19 @@ void asm_offsets(void)
OFFSET(STACKFRAME_RA, stackframe, ra);

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_ARGS
- DEFINE(FREGS_SIZE_ON_STACK, ALIGN(sizeof(struct ftrace_regs), STACK_ALIGN));
- DEFINE(FREGS_EPC, offsetof(struct ftrace_regs, epc));
- DEFINE(FREGS_RA, offsetof(struct ftrace_regs, ra));
- DEFINE(FREGS_SP, offsetof(struct ftrace_regs, sp));
- DEFINE(FREGS_S0, offsetof(struct ftrace_regs, s0));
- DEFINE(FREGS_T1, offsetof(struct ftrace_regs, t1));
- DEFINE(FREGS_A0, offsetof(struct ftrace_regs, a0));
- DEFINE(FREGS_A1, offsetof(struct ftrace_regs, a1));
- DEFINE(FREGS_A2, offsetof(struct ftrace_regs, a2));
- DEFINE(FREGS_A3, offsetof(struct ftrace_regs, a3));
- DEFINE(FREGS_A4, offsetof(struct ftrace_regs, a4));
- DEFINE(FREGS_A5, offsetof(struct ftrace_regs, a5));
- DEFINE(FREGS_A6, offsetof(struct ftrace_regs, a6));
- DEFINE(FREGS_A7, offsetof(struct ftrace_regs, a7));
+ DEFINE(FREGS_SIZE_ON_STACK, ALIGN(sizeof(struct __arch_ftrace_regs), STACK_ALIGN));
+ DEFINE(FREGS_EPC, offsetof(struct __arch_ftrace_regs, epc));
+ DEFINE(FREGS_RA, offsetof(struct __arch_ftrace_regs, ra));
+ DEFINE(FREGS_SP, offsetof(struct __arch_ftrace_regs, sp));
+ DEFINE(FREGS_S0, offsetof(struct __arch_ftrace_regs, s0));
+ DEFINE(FREGS_T1, offsetof(struct __arch_ftrace_regs, t1));
+ DEFINE(FREGS_A0, offsetof(struct __arch_ftrace_regs, a0));
+ DEFINE(FREGS_A1, offsetof(struct __arch_ftrace_regs, a1));
+ DEFINE(FREGS_A2, offsetof(struct __arch_ftrace_regs, a2));
+ DEFINE(FREGS_A3, offsetof(struct __arch_ftrace_regs, a3));
+ DEFINE(FREGS_A4, offsetof(struct __arch_ftrace_regs, a4));
+ DEFINE(FREGS_A5, offsetof(struct __arch_ftrace_regs, a5));
+ DEFINE(FREGS_A6, offsetof(struct __arch_ftrace_regs, a6));
+ DEFINE(FREGS_A7, offsetof(struct __arch_ftrace_regs, a7));
#endif
}
diff --git a/arch/riscv/kernel/ftrace.c b/arch/riscv/kernel/ftrace.c
index 4b95c574fd045..5081ad886841f 100644
--- a/arch/riscv/kernel/ftrace.c
+++ b/arch/riscv/kernel/ftrace.c
@@ -214,7 +214,7 @@ void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- prepare_ftrace_return(&fregs->ra, ip, fregs->s0);
+ prepare_ftrace_return(&arch_ftrace_regs(fregs)->ra, ip, arch_ftrace_regs(fregs)->s0);
}
#else /* CONFIG_DYNAMIC_FTRACE_WITH_ARGS */
extern void ftrace_graph_call(void);
diff --git a/arch/riscv/kernel/mcount.S b/arch/riscv/kernel/mcount.S
index 3a42f6287909d..068168046e0ef 100644
--- a/arch/riscv/kernel/mcount.S
+++ b/arch/riscv/kernel/mcount.S
@@ -12,6 +12,8 @@
#include <asm/asm-offsets.h>
#include <asm/ftrace.h>

+#define ABI_SIZE_ON_STACK 80
+
.text

.macro SAVE_ABI_STATE
@@ -26,12 +28,12 @@
* register if a0 was not saved.
*/
.macro SAVE_RET_ABI_STATE
- addi sp, sp, -4*SZREG
- REG_S s0, 2*SZREG(sp)
- REG_S ra, 3*SZREG(sp)
- REG_S a0, 1*SZREG(sp)
- REG_S a1, 0*SZREG(sp)
- addi s0, sp, 4*SZREG
+ addi sp, sp, -ABI_SIZE_ON_STACK
+ REG_S ra, 1*SZREG(sp)
+ REG_S s0, 8*SZREG(sp)
+ REG_S a0, 10*SZREG(sp)
+ REG_S a1, 11*SZREG(sp)
+ addi s0, sp, ABI_SIZE_ON_STACK
.endm

.macro RESTORE_ABI_STATE
@@ -41,11 +43,11 @@
.endm

.macro RESTORE_RET_ABI_STATE
- REG_L ra, 3*SZREG(sp)
- REG_L s0, 2*SZREG(sp)
- REG_L a0, 1*SZREG(sp)
- REG_L a1, 0*SZREG(sp)
- addi sp, sp, 4*SZREG
+ REG_L ra, 1*SZREG(sp)
+ REG_L s0, 8*SZREG(sp)
+ REG_L a0, 10*SZREG(sp)
+ REG_L a1, 11*SZREG(sp)
+ addi sp, sp, ABI_SIZE_ON_STACK
.endm

SYM_TYPED_FUNC_START(ftrace_stub)
diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
index a30fb2fb8a2b1..2afd5f926155b 100644
--- a/arch/riscv/kernel/vector.c
+++ b/arch/riscv/kernel/vector.c
@@ -99,8 +99,8 @@ static bool insn_is_vector(u32 insn_buf)
return false;
}

-static int riscv_v_thread_zalloc(struct kmem_cache *cache,
- struct __riscv_v_ext_state *ctx)
+static int riscv_v_thread_ctx_alloc(struct kmem_cache *cache,
+ struct __riscv_v_ext_state *ctx)
{
void *datap;

@@ -110,13 +110,15 @@ static int riscv_v_thread_zalloc(struct kmem_cache *cache,

ctx->datap = datap;
memset(ctx, 0, offsetof(struct __riscv_v_ext_state, datap));
+ ctx->vlenb = riscv_v_vsize / 32;
+
return 0;
}

void riscv_v_thread_alloc(struct task_struct *tsk)
{
#ifdef CONFIG_RISCV_ISA_V_PREEMPTIVE
- riscv_v_thread_zalloc(riscv_v_kernel_cachep, &tsk->thread.kernel_vstate);
+ riscv_v_thread_ctx_alloc(riscv_v_kernel_cachep, &tsk->thread.kernel_vstate);
#endif
}

@@ -202,12 +204,14 @@ bool riscv_v_first_use_handler(struct pt_regs *regs)
* context where VS has been off. So, try to allocate the user's V
* context and resume execution.
*/
- if (riscv_v_thread_zalloc(riscv_v_user_cachep, &current->thread.vstate)) {
+ if (riscv_v_thread_ctx_alloc(riscv_v_user_cachep, &current->thread.vstate)) {
force_sig(SIGBUS);
return true;
}
+
riscv_v_vstate_on(regs);
riscv_v_vstate_set_restore(current, regs);
+
return true;
}

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 5c9349df71ccf..5e6aebf1b739a 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -184,7 +184,7 @@ config S390
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_ARG_ACCESS_API
select HAVE_FUNCTION_ERROR_INJECTION
- select HAVE_FUNCTION_GRAPH_RETVAL
+ select HAVE_FUNCTION_GRAPH_FREGS
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS
@@ -239,6 +239,7 @@ config S390
select SPARSE_IRQ
select SWIOTLB
select SYSCTL_EXCEPTION_TRACE
+ select SYSTEM_DATA_VERIFICATION if KEXEC_SIG
select THREAD_INFO_IN_TASK
select TRACE_IRQFLAGS_SUPPORT
select TTY
@@ -264,7 +265,7 @@ config ARCH_SUPPORTS_KEXEC_FILE
def_bool y

config ARCH_SUPPORTS_KEXEC_SIG
- def_bool MODULE_SIG_FORMAT
+ def_bool y

config ARCH_SUPPORTS_KEXEC_PURGATORY
def_bool y
diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
index 406746666eb78..5b7cb49c41ee0 100644
--- a/arch/s390/include/asm/ftrace.h
+++ b/arch/s390/include/asm/ftrace.h
@@ -51,61 +51,36 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
return addr;
}

-struct ftrace_regs {
- struct pt_regs regs;
-};
+#include <linux/ftrace_regs.h>

static __always_inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
- struct pt_regs *regs = &fregs->regs;
+ struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;

if (test_pt_regs_flag(regs, PIF_FTRACE_FULL_REGS))
return regs;
return NULL;
}

-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-struct fgraph_ret_regs {
- unsigned long gpr2;
- unsigned long fp;
-};
-
-static __always_inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->gpr2;
-}
-
-static __always_inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
+static __always_inline void
+ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
+ unsigned long ip)
{
- return ret_regs->fp;
+ arch_ftrace_regs(fregs)->regs.psw.addr = ip;
}
-#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

+#undef ftrace_regs_get_frame_pointer
static __always_inline unsigned long
-ftrace_regs_get_instruction_pointer(const struct ftrace_regs *fregs)
-{
- return fregs->regs.psw.addr;
-}
-
-static __always_inline void
-ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
- unsigned long ip)
+ftrace_regs_get_frame_pointer(struct ftrace_regs *fregs)
{
- fregs->regs.psw.addr = ip;
+ return ftrace_regs_get_stack_pointer(fregs);
}

-#define ftrace_regs_get_argument(fregs, n) \
- regs_get_kernel_argument(&(fregs)->regs, n)
-#define ftrace_regs_get_stack_pointer(fregs) \
- kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
- regs_return_value(&(fregs)->regs)
-#define ftrace_regs_set_return_value(fregs, ret) \
- regs_set_return_value(&(fregs)->regs, ret)
-#define ftrace_override_function_with_return(fregs) \
- override_function_with_return(&(fregs)->regs)
-#define ftrace_regs_query_register_offset(name) \
- regs_query_register_offset(name)
+#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
+ (_regs)->psw.mask = 0; \
+ (_regs)->psw.addr = arch_ftrace_regs(fregs)->regs.psw.addr; \
+ (_regs)->gprs[15] = arch_ftrace_regs(fregs)->regs.gprs[15]; \
+ } while (0)

#ifdef CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS
/*
@@ -117,7 +92,7 @@ ftrace_regs_set_instruction_pointer(struct ftrace_regs *fregs,
*/
static inline void arch_ftrace_set_direct_caller(struct ftrace_regs *fregs, unsigned long addr)
{
- struct pt_regs *regs = &fregs->regs;
+ struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
regs->orig_gpr2 = addr;
}
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */
diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c
index 3cfc4939033c9..8fd98a0f999f8 100644
--- a/arch/s390/kernel/asm-offsets.c
+++ b/arch/s390/kernel/asm-offsets.c
@@ -179,14 +179,8 @@ int main(void)
DEFINE(OLDMEM_SIZE, PARMAREA + offsetof(struct parmarea, oldmem_size));
DEFINE(COMMAND_LINE, PARMAREA + offsetof(struct parmarea, command_line));
DEFINE(MAX_COMMAND_LINE_SIZE, PARMAREA + offsetof(struct parmarea, max_command_line_size));
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
- /* function graph return value tracing */
- OFFSET(__FGRAPH_RET_GPR2, fgraph_ret_regs, gpr2);
- OFFSET(__FGRAPH_RET_FP, fgraph_ret_regs, fp);
- DEFINE(__FGRAPH_RET_SIZE, sizeof(struct fgraph_ret_regs));
-#endif
- OFFSET(__FTRACE_REGS_PT_REGS, ftrace_regs, regs);
- DEFINE(__FTRACE_REGS_SIZE, sizeof(struct ftrace_regs));
+ OFFSET(__FTRACE_REGS_PT_REGS, __arch_ftrace_regs, regs);
+ DEFINE(__FTRACE_REGS_SIZE, sizeof(struct __arch_ftrace_regs));

OFFSET(__PCPU_FLAGS, pcpu, flags);
return 0;
diff --git a/arch/s390/kernel/ftrace.c b/arch/s390/kernel/ftrace.c
index 0b6e62d1d8b87..51439a71e392c 100644
--- a/arch/s390/kernel/ftrace.c
+++ b/arch/s390/kernel/ftrace.c
@@ -318,7 +318,7 @@ void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip,
if (bit < 0)
return;

- kmsan_unpoison_memory(fregs, sizeof(*fregs));
+ kmsan_unpoison_memory(fregs, ftrace_regs_size());
regs = ftrace_get_regs(fregs);
p = get_kprobe((kprobe_opcode_t *)ip);
if (!regs || unlikely(!p) || kprobe_disabled(p))
diff --git a/arch/s390/kernel/mcount.S b/arch/s390/kernel/mcount.S
index 7e267ef63a7fe..2b628aa3d8095 100644
--- a/arch/s390/kernel/mcount.S
+++ b/arch/s390/kernel/mcount.S
@@ -134,14 +134,14 @@ SYM_CODE_END(ftrace_common)
SYM_FUNC_START(return_to_handler)
stmg %r2,%r5,32(%r15)
lgr %r1,%r15
- aghi %r15,-(STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE)
+ # allocate ftrace_regs and stack frame for ftrace_return_to_handler
+ aghi %r15,-STACK_FRAME_SIZE_FREGS
stg %r1,__SF_BACKCHAIN(%r15)
- la %r3,STACK_FRAME_OVERHEAD(%r15)
- stg %r1,__FGRAPH_RET_FP(%r3)
- stg %r2,__FGRAPH_RET_GPR2(%r3)
- lgr %r2,%r3
+ stg %r2,(STACK_FREGS_PTREGS_GPRS+2*8)(%r15)
+ stg %r1,(STACK_FREGS_PTREGS_GPRS+15*8)(%r15)
+ la %r2,STACK_FRAME_OVERHEAD(%r15)
brasl %r14,ftrace_return_to_handler
- aghi %r15,STACK_FRAME_OVERHEAD+__FGRAPH_RET_SIZE
+ aghi %r15,STACK_FRAME_SIZE_FREGS
lgr %r14,%r2
lmg %r2,%r5,32(%r15)
BR_EX %r14
diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
index efdd6ead7ba81..23a2a49547b7a 100644
--- a/arch/s390/kernel/perf_cpum_sf.c
+++ b/arch/s390/kernel/perf_cpum_sf.c
@@ -856,7 +856,7 @@ static bool is_callchain_event(struct perf_event *event)
u64 sample_type = event->attr.sample_type;

return sample_type & (PERF_SAMPLE_CALLCHAIN | PERF_SAMPLE_REGS_USER |
- PERF_SAMPLE_STACK_USER);
+ PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_STACK_USER);
}

static int cpumsf_pmu_event_init(struct perf_event *event)
diff --git a/arch/s390/lib/test_unwind.c b/arch/s390/lib/test_unwind.c
index 8b7f981e6f347..6e42100875e75 100644
--- a/arch/s390/lib/test_unwind.c
+++ b/arch/s390/lib/test_unwind.c
@@ -270,9 +270,9 @@ static void notrace __used test_unwind_ftrace_handler(unsigned long ip,
struct ftrace_ops *fops,
struct ftrace_regs *fregs)
{
- struct unwindme *u = (struct unwindme *)fregs->regs.gprs[2];
+ struct unwindme *u = (struct unwindme *)arch_ftrace_regs(fregs)->regs.gprs[2];

- u->ret = test_unwind(NULL, (u->flags & UWM_REGS) ? &fregs->regs : NULL,
+ u->ret = test_unwind(NULL, (u->flags & UWM_REGS) ? &arch_ftrace_regs(fregs)->regs : NULL,
(u->flags & UWM_SP) ? u->sp : 0);
}

diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
index 9e19d6076d3e8..2fe5b91c3d90f 100644
--- a/arch/s390/pci/pci.c
+++ b/arch/s390/pci/pci.c
@@ -232,24 +232,33 @@ int zpci_fmb_disable_device(struct zpci_dev *zdev)
static int zpci_cfg_load(struct zpci_dev *zdev, int offset, u32 *val, u8 len)
{
u64 req = ZPCI_CREATE_REQ(zdev->fh, ZPCI_PCIAS_CFGSPC, len);
+ int rc = -ENODEV;
u64 data;
- int rc;
+
+ if (!zdev_enabled(zdev))
+ goto out_err;

rc = __zpci_load(&data, req, offset);
- if (!rc) {
- data = le64_to_cpu((__force __le64) data);
- data >>= (8 - len) * 8;
- *val = (u32) data;
- } else
- *val = 0xffffffff;
+ if (rc)
+ goto out_err;
+ data = le64_to_cpu((__force __le64)data);
+ data >>= (8 - len) * 8;
+ *val = (u32)data;
+ return 0;
+
+out_err:
+ PCI_SET_ERROR_RESPONSE(val);
return rc;
}

static int zpci_cfg_store(struct zpci_dev *zdev, int offset, u32 val, u8 len)
{
u64 req = ZPCI_CREATE_REQ(zdev->fh, ZPCI_PCIAS_CFGSPC, len);
+ int rc = -ENODEV;
u64 data = val;
- int rc;
+
+ if (!zdev_enabled(zdev))
+ return rc;

data <<= (8 - len) * 8;
data = (__force u64) cpu_to_le64(data);
diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
index bdcf2a3b6c41b..df6636c8b492e 100644
--- a/arch/s390/purgatory/Makefile
+++ b/arch/s390/purgatory/Makefile
@@ -21,6 +21,7 @@ KBUILD_CFLAGS += -fno-stack-protector
KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
KBUILD_CFLAGS += $(CLANG_FLAGS)
KBUILD_CFLAGS += $(call cc-option,-fno-PIE)
+KBUILD_CFLAGS += $(call cc-option, -Wno-default-const-init-unsafe)
KBUILD_AFLAGS := $(filter-out -DCC_USING_EXPOLINE,$(KBUILD_AFLAGS))

# Since we link purgatory with -r unresolved symbols are not checked, so we
diff --git a/arch/sparc/include/uapi/asm/ioctls.h b/arch/sparc/include/uapi/asm/ioctls.h
index 7fd2f5873c9e7..a8bbdf9877a41 100644
--- a/arch/sparc/include/uapi/asm/ioctls.h
+++ b/arch/sparc/include/uapi/asm/ioctls.h
@@ -5,10 +5,10 @@
#include <asm/ioctl.h>

/* Big T */
-#define TCGETA _IOR('T', 1, struct termio)
-#define TCSETA _IOW('T', 2, struct termio)
-#define TCSETAW _IOW('T', 3, struct termio)
-#define TCSETAF _IOW('T', 4, struct termio)
+#define TCGETA 0x40125401 /* _IOR('T', 1, struct termio) */
+#define TCSETA 0x80125402 /* _IOW('T', 2, struct termio) */
+#define TCSETAW 0x80125403 /* _IOW('T', 3, struct termio) */
+#define TCSETAF 0x80125404 /* _IOW('T', 4, struct termio) */
#define TCSBRK _IO('T', 5)
#define TCXONC _IO('T', 6)
#define TCFLSH _IO('T', 7)
diff --git a/arch/sparc/kernel/process.c b/arch/sparc/kernel/process.c
index 0442ab00518d3..7d69877511fac 100644
--- a/arch/sparc/kernel/process.c
+++ b/arch/sparc/kernel/process.c
@@ -17,14 +17,18 @@

asmlinkage long sparc_fork(struct pt_regs *regs)
{
- unsigned long orig_i1 = regs->u_regs[UREG_I1];
+ unsigned long orig_i1;
long ret;
struct kernel_clone_args args = {
.exit_signal = SIGCHLD,
- /* Reuse the parent's stack for the child. */
- .stack = regs->u_regs[UREG_FP],
};

+ synchronize_user_stack();
+
+ orig_i1 = regs->u_regs[UREG_I1];
+ /* Reuse the parent's stack for the child. */
+ args.stack = regs->u_regs[UREG_FP];
+
ret = kernel_clone(&args);

/* If we get an error and potentially restart the system
@@ -40,16 +44,19 @@ asmlinkage long sparc_fork(struct pt_regs *regs)

asmlinkage long sparc_vfork(struct pt_regs *regs)
{
- unsigned long orig_i1 = regs->u_regs[UREG_I1];
+ unsigned long orig_i1;
long ret;
-
struct kernel_clone_args args = {
.flags = CLONE_VFORK | CLONE_VM,
.exit_signal = SIGCHLD,
- /* Reuse the parent's stack for the child. */
- .stack = regs->u_regs[UREG_FP],
};

+ synchronize_user_stack();
+
+ orig_i1 = regs->u_regs[UREG_I1];
+ /* Reuse the parent's stack for the child. */
+ args.stack = regs->u_regs[UREG_FP];
+
ret = kernel_clone(&args);

/* If we get an error and potentially restart the system
@@ -65,15 +72,18 @@ asmlinkage long sparc_vfork(struct pt_regs *regs)

asmlinkage long sparc_clone(struct pt_regs *regs)
{
- unsigned long orig_i1 = regs->u_regs[UREG_I1];
- unsigned int flags = lower_32_bits(regs->u_regs[UREG_I0]);
+ unsigned long orig_i1;
+ unsigned int flags;
long ret;
+ struct kernel_clone_args args = {0};

- struct kernel_clone_args args = {
- .flags = (flags & ~CSIGNAL),
- .exit_signal = (flags & CSIGNAL),
- .tls = regs->u_regs[UREG_I3],
- };
+ synchronize_user_stack();
+
+ orig_i1 = regs->u_regs[UREG_I1];
+ flags = lower_32_bits(regs->u_regs[UREG_I0]);
+ args.flags = (flags & ~CSIGNAL);
+ args.exit_signal = (flags & CSIGNAL);
+ args.tls = regs->u_regs[UREG_I3];

#ifdef CONFIG_COMPAT
if (test_thread_flag(TIF_32BIT)) {
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index df14d0e67ea0c..d1c73d5ed32f7 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -231,7 +231,7 @@ config X86
select HAVE_GUP_FAST
select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
- select HAVE_FUNCTION_GRAPH_RETVAL if HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FUNCTION_GRAPH_FREGS if HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_GRAPH_TRACER if X86_32 || (X86_64 && DYNAMIC_FTRACE)
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 86ba035f17a35..0f935cced0b7d 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -3051,8 +3051,8 @@ void perf_get_x86_pmu_capability(struct x86_pmu_capability *cap)
cap->version = x86_pmu.version;
cap->num_counters_gp = x86_pmu_num_counters(NULL);
cap->num_counters_fixed = x86_pmu_num_counters_fixed(NULL);
- cap->bit_width_gp = x86_pmu.cntval_bits;
- cap->bit_width_fixed = x86_pmu.cntval_bits;
+ cap->bit_width_gp = cap->num_counters_gp ? x86_pmu.cntval_bits : 0;
+ cap->bit_width_fixed = cap->num_counters_fixed ? x86_pmu.cntval_bits : 0;
cap->events_mask = (unsigned int)x86_pmu.events_maskl;
cap->events_mask_len = x86_pmu.events_mask_len;
cap->pebs_ept = x86_pmu.pebs_ept;
diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
index aee2dfc108408..05bc2b799dd29 100644
--- a/arch/x86/events/intel/cstate.c
+++ b/arch/x86/events/intel/cstate.c
@@ -597,6 +597,7 @@ static const struct x86_cpu_id intel_cstates_match[] __initconst = {
X86_MATCH_VFM(INTEL_ATOM_SILVERMONT, &slm_cstates),
X86_MATCH_VFM(INTEL_ATOM_SILVERMONT_D, &slm_cstates),
X86_MATCH_VFM(INTEL_ATOM_AIRMONT, &slm_cstates),
+ X86_MATCH_VFM(INTEL_ATOM_AIRMONT_NP, &slm_cstates),

X86_MATCH_VFM(INTEL_BROADWELL, &snb_cstates),
X86_MATCH_VFM(INTEL_BROADWELL_D, &snb_cstates),
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index 45b1866ff0513..3a05ea2871dc5 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -76,6 +76,7 @@ static bool test_intel(int idx, void *data)
case INTEL_ATOM_SILVERMONT:
case INTEL_ATOM_SILVERMONT_D:
case INTEL_ATOM_AIRMONT:
+ case INTEL_ATOM_AIRMONT_NP:

case INTEL_ATOM_GOLDMONT:
case INTEL_ATOM_GOLDMONT_D:
diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
index 2510e91b29b08..b135ba5c18710 100644
--- a/arch/x86/hyperv/hv_vtl.c
+++ b/arch/x86/hyperv/hv_vtl.c
@@ -68,7 +68,7 @@ static void hv_vtl_ap_entry(void)

static int hv_vtl_bringup_vcpu(u32 target_vp_index, int cpu, u64 eip_ignored)
{
- u64 status;
+ u64 status, rsp, rip;
int ret = 0;
struct hv_enable_vp_vtl *input;
unsigned long irq_flags;
@@ -81,9 +81,11 @@ static int hv_vtl_bringup_vcpu(u32 target_vp_index, int cpu, u64 eip_ignored)
struct desc_struct *gdt;

struct task_struct *idle = idle_thread_get(cpu);
- u64 rsp = (unsigned long)idle->thread.sp;
+ if (IS_ERR(idle))
+ return PTR_ERR(idle);

- u64 rip = (u64)&hv_vtl_ap_entry;
+ rsp = (unsigned long)idle->thread.sp;
+ rip = (u64)&hv_vtl_ap_entry;

native_store_gdt(&gdt_ptr);
store_idt(&idt_ptr);
diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
index b4d719de2c845..c42f7169fc1c0 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -35,37 +35,33 @@ static inline unsigned long ftrace_call_adjust(unsigned long addr)
}

#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
-struct ftrace_regs {
- struct pt_regs regs;
-};
+
+#include <linux/ftrace_regs.h>

static __always_inline struct pt_regs *
arch_ftrace_get_regs(struct ftrace_regs *fregs)
{
/* Only when FL_SAVE_REGS is set, cs will be non zero */
- if (!fregs->regs.cs)
+ if (!arch_ftrace_regs(fregs)->regs.cs)
return NULL;
- return &fregs->regs;
+ return &arch_ftrace_regs(fregs)->regs;
}

+#define arch_ftrace_partial_regs(regs) do { \
+ regs->flags |= X86_EFLAGS_FIXED; \
+ regs->cs = __KERNEL_CS; \
+} while (0)
+
+#define arch_ftrace_fill_perf_regs(fregs, _regs) do { \
+ (_regs)->ip = arch_ftrace_regs(fregs)->regs.ip; \
+ (_regs)->sp = arch_ftrace_regs(fregs)->regs.sp; \
+ (_regs)->cs = __KERNEL_CS; \
+ (_regs)->flags = 0; \
+ } while (0)
+
#define ftrace_regs_set_instruction_pointer(fregs, _ip) \
- do { (fregs)->regs.ip = (_ip); } while (0)
-
-#define ftrace_regs_get_instruction_pointer(fregs) \
- ((fregs)->regs.ip)
-
-#define ftrace_regs_get_argument(fregs, n) \
- regs_get_kernel_argument(&(fregs)->regs, n)
-#define ftrace_regs_get_stack_pointer(fregs) \
- kernel_stack_pointer(&(fregs)->regs)
-#define ftrace_regs_return_value(fregs) \
- regs_return_value(&(fregs)->regs)
-#define ftrace_regs_set_return_value(fregs, ret) \
- regs_set_return_value(&(fregs)->regs, ret)
-#define ftrace_override_function_with_return(fregs) \
- override_function_with_return(&(fregs)->regs)
-#define ftrace_regs_query_register_offset(name) \
- regs_query_register_offset(name)
+ do { arch_ftrace_regs(fregs)->regs.ip = (_ip); } while (0)
+

struct ftrace_ops;
#define ftrace_graph_func ftrace_graph_func
@@ -90,7 +86,7 @@ __arch_ftrace_set_direct_caller(struct pt_regs *regs, unsigned long addr)
regs->orig_ax = addr;
}
#define arch_ftrace_set_direct_caller(fregs, addr) \
- __arch_ftrace_set_direct_caller(&(fregs)->regs, addr)
+ __arch_ftrace_set_direct_caller(&arch_ftrace_regs(fregs)->regs, addr)
#endif /* CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS */

#ifdef CONFIG_DYNAMIC_FTRACE
@@ -150,24 +146,4 @@ static inline bool arch_trace_is_compat_syscall(struct pt_regs *regs)
#endif /* !COMPILE_OFFSETS */
#endif /* !__ASSEMBLY__ */

-#ifndef __ASSEMBLY__
-#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-struct fgraph_ret_regs {
- unsigned long ax;
- unsigned long dx;
- unsigned long bp;
-};
-
-static inline unsigned long fgraph_ret_regs_return_value(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->ax;
-}
-
-static inline unsigned long fgraph_ret_regs_frame_pointer(struct fgraph_ret_regs *ret_regs)
-{
- return ret_regs->bp;
-}
-#endif /* ifdef CONFIG_FUNCTION_GRAPH_TRACER */
-#endif
-
#endif /* _ASM_X86_FTRACE_H */
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index bfab966ea56e8..d3b14a9ad2edb 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -647,7 +647,7 @@ void prepare_ftrace_return(unsigned long ip, unsigned long *parent,
void ftrace_graph_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- struct pt_regs *regs = &fregs->regs;
+ struct pt_regs *regs = &arch_ftrace_regs(fregs)->regs;
unsigned long *stack = (unsigned long *)kernel_stack_pointer(regs);

prepare_ftrace_return(ip, (unsigned long *)stack, 0);
diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index 58d9ed50fe617..f4e0c33612342 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -187,14 +187,15 @@ SYM_CODE_END(ftrace_graph_caller)

.globl return_to_handler
return_to_handler:
- pushl $0
- pushl %edx
- pushl %eax
+ subl $(PTREGS_SIZE), %esp
+ movl $0, PT_EBP(%esp)
+ movl %edx, PT_EDX(%esp)
+ movl %eax, PT_EAX(%esp)
movl %esp, %eax
call ftrace_return_to_handler
movl %eax, %ecx
- popl %eax
- popl %edx
- addl $4, %esp # skip ebp
+ movl PT_EAX(%esp), %eax
+ movl PT_EDX(%esp), %edx
+ addl $(PTREGS_SIZE), %esp
JMP_NOSPEC ecx
#endif
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 214f30e9f0c01..8a3cff618692c 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -348,21 +348,28 @@ STACK_FRAME_NON_STANDARD_FP(__fentry__)
SYM_CODE_START(return_to_handler)
UNWIND_HINT_UNDEFINED
ANNOTATE_NOENDBR
- subq $24, %rsp

- /* Save the return values */
- movq %rax, (%rsp)
- movq %rdx, 8(%rsp)
- movq %rbp, 16(%rsp)
+ /* Restore return_to_handler value that got eaten by previous ret instruction. */
+ subq $8, %rsp
+ UNWIND_HINT_FUNC
+
+ /* Save ftrace_regs for function exit context */
+ subq $(FRAME_SIZE), %rsp
+
+ movq %rax, RAX(%rsp)
+ movq %rdx, RDX(%rsp)
+ movq %rbp, RBP(%rsp)
+ movq %rsp, RSP(%rsp)
movq %rsp, %rdi

call ftrace_return_to_handler

movq %rax, %rdi
- movq 8(%rsp), %rdx
- movq (%rsp), %rax
+ movq RDX(%rsp), %rdx
+ movq RAX(%rsp), %rax
+
+ addq $(FRAME_SIZE) + 8, %rsp

- addq $24, %rsp
/*
* Jump back to the old return address. This cannot be JMP_NOSPEC rdi
* since IBT would demand that contain ENDBR, which simply isn't so for
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index 68530fad05f74..c776b01e87898 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -184,6 +184,13 @@ setup_efi_state(struct boot_params *params, unsigned long params_load_addr,
struct efi_info *current_ei = &boot_params.efi_info;
struct efi_info *ei = &params->efi_info;

+ if (!params->acpi_rsdp_addr) {
+ if (efi.acpi20 != EFI_INVALID_TABLE_ADDR)
+ params->acpi_rsdp_addr = efi.acpi20;
+ else if (efi.acpi != EFI_INVALID_TABLE_ADDR)
+ params->acpi_rsdp_addr = efi.acpi;
+ }
+
if (!efi_enabled(EFI_RUNTIME_SERVICES))
return 0;

diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f1fea506e20f4..234c4d9e50b8f 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -372,9 +372,15 @@ int __init ima_free_kexec_buffer(void)

int __init ima_get_kexec_buffer(void **addr, size_t *size)
{
+ int ret;
+
if (!ima_kexec_buffer_size)
return -ENOENT;

+ ret = ima_validate_range(ima_kexec_buffer_phys, ima_kexec_buffer_size);
+ if (ret)
+ return ret;
+
*addr = __va(ima_kexec_buffer_phys);
*size = ima_kexec_buffer_size;

diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
index c6556599376e2..862758eeac846 100644
--- a/arch/x86/kvm/svm/nested.c
+++ b/arch/x86/kvm/svm/nested.c
@@ -1760,10 +1760,9 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu,
* thus MMU might not be initialized correctly.
* Set it again to fix this.
*/
-
ret = nested_svm_load_cr3(&svm->vcpu, vcpu->arch.cr3,
nested_npt_enabled(svm), false);
- if (WARN_ON_ONCE(ret))
+ if (ret)
goto out_free;

svm->nested.force_msr_bitmap_recalc = true;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 85816c4186b96..853a86dfc8f1b 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -2358,12 +2358,13 @@ static int vmload_vmsave_interception(struct kvm_vcpu *vcpu, bool vmload)

ret = kvm_skip_emulated_instruction(vcpu);

+ /* KVM always performs VMLOAD/VMSAVE on VMCB01 (see __svm_vcpu_run()) */
if (vmload) {
- svm_copy_vmloadsave_state(svm->vmcb, vmcb12);
+ svm_copy_vmloadsave_state(svm->vmcb01.ptr, vmcb12);
svm->sysenter_eip_hi = 0;
svm->sysenter_esp_hi = 0;
} else {
- svm_copy_vmloadsave_state(vmcb12, svm->vmcb);
+ svm_copy_vmloadsave_state(vmcb12, svm->vmcb01.ptr);
}

kvm_vcpu_unmap(vcpu, &map, true);
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 766a9ce2da58b..8f673aaa0490f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3968,47 +3968,47 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
break;
case MSR_KVM_WALL_CLOCK_NEW:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE2))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

vcpu->kvm->arch.wall_clock = data;
kvm_write_wall_clock(vcpu->kvm, data, 0);
break;
case MSR_KVM_WALL_CLOCK:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

vcpu->kvm->arch.wall_clock = data;
kvm_write_wall_clock(vcpu->kvm, data, 0);
break;
case MSR_KVM_SYSTEM_TIME_NEW:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE2))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

kvm_write_system_time(vcpu, data, false, msr_info->host_initiated);
break;
case MSR_KVM_SYSTEM_TIME:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

kvm_write_system_time(vcpu, data, true, msr_info->host_initiated);
break;
case MSR_KVM_ASYNC_PF_EN:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

if (kvm_pv_enable_async_pf(vcpu, data))
return 1;
break;
case MSR_KVM_ASYNC_PF_INT:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

if (kvm_pv_enable_async_pf_int(vcpu, data))
return 1;
break;
case MSR_KVM_ASYNC_PF_ACK:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;
if (data & 0x1) {
vcpu->arch.apf.pageready_pending = false;
kvm_check_async_pf_completion(vcpu);
@@ -4016,7 +4016,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
break;
case MSR_KVM_STEAL_TIME:
if (!guest_pv_has(vcpu, KVM_FEATURE_STEAL_TIME))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

if (unlikely(!sched_info_on()))
return 1;
@@ -4034,7 +4034,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
break;
case MSR_KVM_PV_EOI_EN:
if (!guest_pv_has(vcpu, KVM_FEATURE_PV_EOI))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

if (kvm_lapic_set_pv_eoi(vcpu, data, sizeof(u8)))
return 1;
@@ -4042,7 +4042,7 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)

case MSR_KVM_POLL_CONTROL:
if (!guest_pv_has(vcpu, KVM_FEATURE_POLL_CONTROL))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

/* only enable bit supported */
if (data & (-1ULL << 1))
@@ -4343,61 +4343,61 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
break;
case MSR_KVM_WALL_CLOCK:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->kvm->arch.wall_clock;
break;
case MSR_KVM_WALL_CLOCK_NEW:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE2))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->kvm->arch.wall_clock;
break;
case MSR_KVM_SYSTEM_TIME:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.time;
break;
case MSR_KVM_SYSTEM_TIME_NEW:
if (!guest_pv_has(vcpu, KVM_FEATURE_CLOCKSOURCE2))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.time;
break;
case MSR_KVM_ASYNC_PF_EN:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.apf.msr_en_val;
break;
case MSR_KVM_ASYNC_PF_INT:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.apf.msr_int_val;
break;
case MSR_KVM_ASYNC_PF_ACK:
if (!guest_pv_has(vcpu, KVM_FEATURE_ASYNC_PF_INT))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = 0;
break;
case MSR_KVM_STEAL_TIME:
if (!guest_pv_has(vcpu, KVM_FEATURE_STEAL_TIME))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.st.msr_val;
break;
case MSR_KVM_PV_EOI_EN:
if (!guest_pv_has(vcpu, KVM_FEATURE_PV_EOI))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.pv_eoi.msr_val;
break;
case MSR_KVM_POLL_CONTROL:
if (!guest_pv_has(vcpu, KVM_FEATURE_POLL_CONTROL))
- return 1;
+ return KVM_MSR_RET_UNSUPPORTED;

msr_info->data = vcpu->arch.msr_kvm_poll_control;
break;
@@ -11815,9 +11815,11 @@ static void __get_sregs2(struct kvm_vcpu *vcpu, struct kvm_sregs2 *sregs2)
return;

if (is_pae_paging(vcpu)) {
+ kvm_vcpu_srcu_read_lock(vcpu);
for (i = 0 ; i < 4 ; i++)
sregs2->pdptrs[i] = kvm_pdptr_read(vcpu, i);
sregs2->flags |= KVM_SREGS2_FLAGS_PDPTRS_VALID;
+ kvm_vcpu_srcu_read_unlock(vcpu);
}
}

diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
index ce4fd8d33da46..2774cf74fa940 100644
--- a/arch/x86/platform/pvh/head.S
+++ b/arch/x86/platform/pvh/head.S
@@ -91,10 +91,12 @@ SYM_CODE_START_LOCAL(pvh_start_xen)

leal rva(early_stack_end)(%ebp), %esp

+#if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
/* Enable PAE mode. */
mov %cr4, %eax
orl $X86_CR4_PAE, %eax
mov %eax, %cr4
+#endif

#ifdef CONFIG_X86_64
/* Enable Long mode. */
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 0c950bbca309f..86dd33f1aeaab 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -474,7 +474,7 @@ int __init arch_xen_unpopulated_init(struct resource **res)
* driver to know how much of the physmap is unpopulated and
* set an accurate initial memory target.
*/
- xen_released_pages += xen_extra_mem[i].n_pfns;
+ xen_unpopulated_pages += xen_extra_mem[i].n_pfns;
/* Zero so region is not also added to the balloon driver. */
xen_extra_mem[i].n_pfns = 0;
}
diff --git a/block/bio.c b/block/bio.c
index 094a5adf79d23..b919f3fa2f2d4 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1119,6 +1119,22 @@ void __bio_add_page(struct bio *bio, struct page *page,
}
EXPORT_SYMBOL_GPL(__bio_add_page);

+/**
+ * bio_add_virt_nofail - add data in the direct kernel mapping to a bio
+ * @bio: destination bio
+ * @vaddr: data to add
+ * @len: length of the data to add, may cross pages
+ *
+ * Add the data at @vaddr to @bio. The caller must have ensured a segment
+ * is available for the added data. No merging into an existing segment
+ * will be performed.
+ */
+void bio_add_virt_nofail(struct bio *bio, void *vaddr, unsigned len)
+{
+ __bio_add_page(bio, virt_to_page(vaddr), len, offset_in_page(vaddr));
+}
+EXPORT_SYMBOL_GPL(bio_add_virt_nofail);
+
/**
* bio_add_page - attempt to add page(s) to bio
* @bio: destination bio
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 7ddd7dd23dda8..68dc8076d8cbc 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -130,8 +130,9 @@ static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
return bio;
}

-struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
- unsigned *nsegs)
+static struct bio *__bio_split_discard(struct bio *bio,
+ const struct queue_limits *lim, unsigned *nsegs,
+ unsigned int max_sectors)
{
unsigned int max_discard_sectors, granularity;
sector_t tmp;
@@ -141,8 +142,7 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,

granularity = max(lim->discard_granularity >> 9, 1U);

- max_discard_sectors =
- min(lim->max_discard_sectors, bio_allowed_max_sectors(lim));
+ max_discard_sectors = min(max_sectors, bio_allowed_max_sectors(lim));
max_discard_sectors -= max_discard_sectors % granularity;
if (unlikely(!max_discard_sectors))
return bio;
@@ -166,6 +166,19 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
return bio_submit_split(bio, split_sectors);
}

+struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
+ unsigned *nsegs)
+{
+ unsigned int max_sectors;
+
+ if (bio_op(bio) == REQ_OP_SECURE_ERASE)
+ max_sectors = lim->max_secure_erase_sectors;
+ else
+ max_sectors = lim->max_discard_sectors;
+
+ return __bio_split_discard(bio, lim, nsegs, max_sectors);
+}
+
static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
bool is_atomic)
{
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 5463697a84428..cbc8bbde3a2bb 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -713,8 +713,10 @@ void blk_mq_debugfs_register_hctxs(struct request_queue *q)
struct blk_mq_hw_ctx *hctx;
unsigned long i;

+ mutex_lock(&q->debugfs_mutex);
queue_for_each_hw_ctx(q, hctx, i)
blk_mq_debugfs_register_hctx(q, hctx);
+ mutex_unlock(&q->debugfs_mutex);
}

void blk_mq_debugfs_unregister_hctxs(struct request_queue *q)
diff --git a/block/blk.h b/block/blk.h
index 63c92db0a5079..e7d7c5c636524 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -193,10 +193,14 @@ static inline unsigned int blk_queue_get_max_sectors(struct request *rq)
struct request_queue *q = rq->q;
enum req_op op = req_op(rq);

- if (unlikely(op == REQ_OP_DISCARD || op == REQ_OP_SECURE_ERASE))
+ if (unlikely(op == REQ_OP_DISCARD))
return min(q->limits.max_discard_sectors,
UINT_MAX >> SECTOR_SHIFT);

+ if (unlikely(op == REQ_OP_SECURE_ERASE))
+ return min(q->limits.max_secure_erase_sectors,
+ UINT_MAX >> SECTOR_SHIFT);
+
if (unlikely(op == REQ_OP_WRITE_ZEROES))
return q->limits.max_write_zeroes_sectors;

diff --git a/drivers/acpi/acpi_processor.c b/drivers/acpi/acpi_processor.c
index 2a99f5eb69629..d8674aee28c2e 100644
--- a/drivers/acpi/acpi_processor.c
+++ b/drivers/acpi/acpi_processor.c
@@ -50,6 +50,7 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
{
u8 value1 = 0;
u8 value2 = 0;
+ struct pci_dev *ide_dev = NULL, *isa_dev = NULL;


if (!dev)
@@ -107,12 +108,12 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
* each IDE controller's DMA status to make sure we catch all
* DMA activity.
*/
- dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+ ide_dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_82371AB,
PCI_ANY_ID, PCI_ANY_ID, NULL);
- if (dev) {
- errata.piix4.bmisx = pci_resource_start(dev, 4);
- pci_dev_put(dev);
+ if (ide_dev) {
+ errata.piix4.bmisx = pci_resource_start(ide_dev, 4);
+ pci_dev_put(ide_dev);
}

/*
@@ -124,24 +125,25 @@ static int acpi_processor_errata_piix4(struct pci_dev *dev)
* disable C3 support if this is enabled, as some legacy
* devices won't operate well if fast DMA is disabled.
*/
- dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
+ isa_dev = pci_get_subsys(PCI_VENDOR_ID_INTEL,
PCI_DEVICE_ID_INTEL_82371AB_0,
PCI_ANY_ID, PCI_ANY_ID, NULL);
- if (dev) {
- pci_read_config_byte(dev, 0x76, &value1);
- pci_read_config_byte(dev, 0x77, &value2);
+ if (isa_dev) {
+ pci_read_config_byte(isa_dev, 0x76, &value1);
+ pci_read_config_byte(isa_dev, 0x77, &value2);
if ((value1 & 0x80) || (value2 & 0x80))
errata.piix4.fdma = 1;
- pci_dev_put(dev);
+ pci_dev_put(isa_dev);
}

break;
}

- if (errata.piix4.bmisx)
- dev_dbg(&dev->dev, "Bus master activity detection (BM-IDE) erratum enabled\n");
- if (errata.piix4.fdma)
- dev_dbg(&dev->dev, "Type-F DMA livelock erratum (C3 disabled)\n");
+ if (ide_dev)
+ dev_dbg(&ide_dev->dev, "Bus master activity detection (BM-IDE) erratum enabled\n");
+
+ if (isa_dev)
+ dev_dbg(&isa_dev->dev, "Type-F DMA livelock erratum (C3 disabled)\n");

return 0;
}
diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
index cf53b9535f18e..7788c27ccf461 100644
--- a/drivers/acpi/acpica/evregion.c
+++ b/drivers/acpi/acpica/evregion.c
@@ -163,7 +163,9 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj,
return_ACPI_STATUS(AE_NOT_EXIST);
}

- if (region_obj->region.space_id == ACPI_ADR_SPACE_PLATFORM_COMM) {
+ if (field_obj
+ && region_obj->region.space_id ==
+ ACPI_ADR_SPACE_PLATFORM_COMM) {
struct acpi_pcc_info *ctx =
handler_desc->address_space.context;

diff --git a/drivers/acpi/acpica/exoparg3.c b/drivers/acpi/acpica/exoparg3.c
index d3091f619909e..41758343657dd 100644
--- a/drivers/acpi/acpica/exoparg3.c
+++ b/drivers/acpi/acpica/exoparg3.c
@@ -10,6 +10,7 @@
#include <acpi/acpi.h>
#include "accommon.h"
#include "acinterp.h"
+#include <acpi/acoutput.h>
#include "acparser.h"
#include "amlcode.h"

@@ -51,8 +52,7 @@ ACPI_MODULE_NAME("exoparg3")
acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
{
union acpi_operand_object **operand = &walk_state->operands[0];
- struct acpi_signal_fatal_info *fatal;
- acpi_status status = AE_OK;
+ struct acpi_signal_fatal_info fatal;

ACPI_FUNCTION_TRACE_STR(ex_opcode_3A_0T_0R,
acpi_ps_get_opcode_name(walk_state->opcode));
@@ -60,28 +60,23 @@ acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
switch (walk_state->opcode) {
case AML_FATAL_OP: /* Fatal (fatal_type fatal_code fatal_arg) */

- ACPI_DEBUG_PRINT((ACPI_DB_INFO,
- "FatalOp: Type %X Code %X Arg %X "
- "<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\n",
- (u32)operand[0]->integer.value,
- (u32)operand[1]->integer.value,
- (u32)operand[2]->integer.value));
-
- fatal = ACPI_ALLOCATE(sizeof(struct acpi_signal_fatal_info));
- if (fatal) {
- fatal->type = (u32) operand[0]->integer.value;
- fatal->code = (u32) operand[1]->integer.value;
- fatal->argument = (u32) operand[2]->integer.value;
- }
+ fatal.type = (u32)operand[0]->integer.value;
+ fatal.code = (u32)operand[1]->integer.value;
+ fatal.argument = (u32)operand[2]->integer.value;

- /* Always signal the OS! */
+ ACPI_BIOS_ERROR((AE_INFO,
+ "Fatal ACPI BIOS error (Type 0x%X Code 0x%X Arg 0x%X)\n",
+ fatal.type, fatal.code, fatal.argument));

- status = acpi_os_signal(ACPI_SIGNAL_FATAL, fatal);
+ /* Always signal the OS! */

- /* Might return while OS is shutting down, just continue */
+ acpi_os_signal(ACPI_SIGNAL_FATAL, &fatal);

- ACPI_FREE(fatal);
- goto cleanup;
+ /*
+ * Might return while OS is shutting down, so abort the AML execution
+ * by returning an error.
+ */
+ return_ACPI_STATUS(AE_ERROR);

case AML_EXTERNAL_OP:
/*
@@ -93,21 +88,16 @@ acpi_status acpi_ex_opcode_3A_0T_0R(struct acpi_walk_state *walk_state)
* wrong if an external opcode ever gets here.
*/
ACPI_ERROR((AE_INFO, "Executed External Op"));
- status = AE_OK;
- goto cleanup;
+
+ return_ACPI_STATUS(AE_OK);

default:

ACPI_ERROR((AE_INFO, "Unknown AML opcode 0x%X",
walk_state->opcode));

- status = AE_AML_BAD_OPCODE;
- goto cleanup;
+ return_ACPI_STATUS(AE_AML_BAD_OPCODE);
}
-
-cleanup:
-
- return_ACPI_STATUS(status);
}

/*******************************************************************************
diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index 45fa2510e4cf5..98901fbe2f7b7 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -29,6 +29,7 @@
#include <linux/cper.h>
#include <linux/cleanup.h>
#include <linux/platform_device.h>
+#include <linux/minmax.h>
#include <linux/mutex.h>
#include <linux/ratelimit.h>
#include <linux/vmalloc.h>
@@ -293,6 +294,7 @@ static struct ghes *ghes_new(struct acpi_hest_generic *generic)
error_block_length = GHES_ESTATUS_MAX_SIZE;
}
ghes->estatus = kmalloc(error_block_length, GFP_KERNEL);
+ ghes->estatus_length = error_block_length;
if (!ghes->estatus) {
rc = -ENOMEM;
goto err_unmap_status_addr;
@@ -364,13 +366,15 @@ static int __ghes_check_estatus(struct ghes *ghes,
struct acpi_hest_generic_status *estatus)
{
u32 len = cper_estatus_len(estatus);
+ u32 max_len = min(ghes->generic->error_block_length,
+ ghes->estatus_length);

if (len < sizeof(*estatus)) {
pr_warn_ratelimited(FW_WARN GHES_PFX "Truncated error status block!\n");
return -EIO;
}

- if (len > ghes->generic->error_block_length) {
+ if (!len || len > max_len) {
pr_warn_ratelimited(FW_WARN GHES_PFX "Invalid error status block length!\n");
return -EIO;
}
@@ -532,21 +536,45 @@ static bool ghes_handle_arm_hw_error(struct acpi_hest_generic_data *gdata,
{
struct cper_sec_proc_arm *err = acpi_hest_get_payload(gdata);
int flags = sync ? MF_ACTION_REQUIRED : 0;
+ int length = gdata->error_data_length;
char error_type[120];
bool queued = false;
int sec_sev, i;
char *p;

sec_sev = ghes_severity(gdata->error_severity);
- log_arm_hw_error(err, sec_sev);
+ if (length >= sizeof(*err)) {
+ log_arm_hw_error(err, sec_sev);
+ } else {
+ pr_warn(FW_BUG "arm error length: %d\n", length);
+ pr_warn(FW_BUG "length is too small\n");
+ pr_warn(FW_BUG "firmware-generated error record is incorrect\n");
+ return false;
+ }
+
if (sev != GHES_SEV_RECOVERABLE || sec_sev != GHES_SEV_RECOVERABLE)
return false;

p = (char *)(err + 1);
+ length -= sizeof(*err);
+
for (i = 0; i < err->err_info_num; i++) {
- struct cper_arm_err_info *err_info = (struct cper_arm_err_info *)p;
- bool is_cache = err_info->type & CPER_ARM_CACHE_ERROR;
- bool has_pa = (err_info->validation_bits & CPER_ARM_INFO_VALID_PHYSICAL_ADDR);
+ struct cper_arm_err_info *err_info;
+ bool is_cache, has_pa;
+
+ /* Ensure we have enough data for the error info header */
+ if (length < sizeof(*err_info))
+ break;
+
+ err_info = (struct cper_arm_err_info *)p;
+
+ /* Validate the claimed length before using it */
+ length -= err_info->length;
+ if (length < 0)
+ break;
+
+ is_cache = err_info->type & CPER_ARM_CACHE_ERROR;
+ has_pa = (err_info->validation_bits & CPER_ARM_INFO_VALID_PHYSICAL_ADDR);

/*
* The field (err_info->error_info & BIT(26)) is fixed to set to
diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
index 11c7e35fafa25..a33d60e625f81 100644
--- a/drivers/acpi/battery.c
+++ b/drivers/acpi/battery.c
@@ -212,7 +212,14 @@ static int acpi_battery_get_property(struct power_supply *psy,
if (battery->state & ACPI_BATTERY_STATE_DISCHARGING)
val->intval = acpi_battery_handle_discharging(battery);
else if (battery->state & ACPI_BATTERY_STATE_CHARGING)
- val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ /* Validate the status by checking the current. */
+ if (battery->rate_now != ACPI_BATTERY_VALUE_UNKNOWN &&
+ battery->rate_now == 0) {
+ /* On charge but no current (0W/0mA). */
+ val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
+ } else {
+ val->intval = POWER_SUPPLY_STATUS_CHARGING;
+ }
else if (battery->state & ACPI_BATTERY_STATE_CHARGE_LIMITING)
val->intval = POWER_SUPPLY_STATUS_NOT_CHARGING;
else if (acpi_battery_is_charged(battery))
diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
index 1e8e2002f81af..c90121cae628a 100644
--- a/drivers/acpi/cppc_acpi.c
+++ b/drivers/acpi/cppc_acpi.c
@@ -349,7 +349,7 @@ static int send_pcc_cmd(int pcc_ss_id, u16 cmd)
end:
if (cmd == CMD_WRITE) {
if (unlikely(ret)) {
- for_each_possible_cpu(i) {
+ for_each_online_cpu(i) {
struct cpc_desc *desc = per_cpu(cpc_desc_ptr, i);

if (!desc)
@@ -511,7 +511,7 @@ int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data)
else if (pdomain->coord_type == DOMAIN_COORD_TYPE_SW_ANY)
cpu_data->shared_type = CPUFREQ_SHARED_TYPE_ANY;

- for_each_possible_cpu(i) {
+ for_each_online_cpu(i) {
if (i == cpu)
continue;

diff --git a/drivers/acpi/power.c b/drivers/acpi/power.c
index c2c70139c4f1d..ff5fcd541e50f 100644
--- a/drivers/acpi/power.c
+++ b/drivers/acpi/power.c
@@ -1035,6 +1035,19 @@ static const struct dmi_system_id dmi_leave_unused_power_resources_on[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "SATELLITE Click Mini L9W-B"),
},
},
+ {
+ /*
+ * THUNDEROBOT ZERO laptop: Due to its SSDT table bug, power
+ * resource 'PXP' will be shut down on initialization, making
+ * the NVMe #2 and the NVIDIA dGPU both unavailable (they're
+ * both controlled by 'PXP').
+ */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "THUNDEROBOT"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "ZERO"),
+ },
+ },
{}
};

diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
index 4937032490689..8a01a3718aa94 100644
--- a/drivers/acpi/resource.c
+++ b/drivers/acpi/resource.c
@@ -531,6 +531,12 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
DMI_MATCH(DMI_BOARD_NAME, "16T90SP"),
},
},
+ {
+ /* JWIPC JVC9100 */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "JVC9100"),
+ },
+ },
{ }
};

@@ -698,6 +704,8 @@ struct irq_override_cmp {

static const struct irq_override_cmp override_table[] = {
{ irq1_level_low_skip_override, 1, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 0, false },
+ { irq1_level_low_skip_override, 10, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 1, false },
+ { irq1_level_low_skip_override, 11, ACPI_LEVEL_SENSITIVE, ACPI_ACTIVE_LOW, 1, false },
{ irq1_edge_low_force_override, 1, ACPI_EDGE_SENSITIVE, ACPI_ACTIVE_LOW, 1, true },
};

diff --git a/drivers/acpi/x86/s2idle.c b/drivers/acpi/x86/s2idle.c
index dd0b40b9bbe8b..377a268867c21 100644
--- a/drivers/acpi/x86/s2idle.c
+++ b/drivers/acpi/x86/s2idle.c
@@ -45,6 +45,7 @@ static const struct acpi_device_id lps0_device_ids[] = {
#define ACPI_LPS0_EXIT 6
#define ACPI_LPS0_MS_ENTRY 7
#define ACPI_LPS0_MS_EXIT 8
+#define ACPI_MS_TURN_ON_DISPLAY 9

/* AMD */
#define ACPI_LPS0_DSM_UUID_AMD "e3f32452-febc-43ce-9039-932122d37721"
@@ -373,6 +374,8 @@ static const char *acpi_sleep_dsm_state_to_str(unsigned int state)
return "lps0 ms entry";
case ACPI_LPS0_MS_EXIT:
return "lps0 ms exit";
+ case ACPI_MS_TURN_ON_DISPLAY:
+ return "lps0 ms turn on display";
}
} else {
switch (state) {
@@ -619,6 +622,9 @@ void acpi_s2idle_restore_early(void)
if (lps0_dsm_func_mask_microsoft > 0) {
acpi_sleep_run_lps0_dsm(ACPI_LPS0_EXIT,
lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
+ /* Intent to turn on display */
+ acpi_sleep_run_lps0_dsm(ACPI_MS_TURN_ON_DISPLAY,
+ lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
/* Modern Standby exit */
acpi_sleep_run_lps0_dsm(ACPI_LPS0_MS_EXIT,
lps0_dsm_func_mask_microsoft, lps0_dsm_guid_microsoft);
diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
index 4ee30c2897a2b..418951639f511 100644
--- a/drivers/acpi/x86/utils.c
+++ b/drivers/acpi/x86/utils.c
@@ -81,6 +81,18 @@ static const struct override_status_id override_status_ids[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "Mipad2"),
}),

+ /*
+ * Lenovo Yoga Book uses PWM2 for touch keyboard backlight control.
+ * It needs to be enabled only for the Android device version (YB1-X90*
+ * aka YETI-11); the Windows version (YB1-X91*) uses ACPI control
+ * methods.
+ */
+ PRESENT_ENTRY_HID("80862289", "2", INTEL_ATOM_AIRMONT, {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
+ }),
+
/*
* The INT0002 device is necessary to clear wakeup interrupt sources
* on Cherry Trail devices, without it we get nobody cared IRQ msgs.
diff --git a/drivers/android/binder.c b/drivers/android/binder.c
index 9a6d1822dc3be..c470f5cce9911 100644
--- a/drivers/android/binder.c
+++ b/drivers/android/binder.c
@@ -4444,7 +4444,7 @@ static int binder_thread_write(struct binder_proc *proc,
}
}
binder_debug(BINDER_DEBUG_DEAD_BINDER,
- "%d:%d BC_DEAD_BINDER_DONE %016llx found %pK\n",
+ "%d:%d BC_DEAD_BINDER_DONE %016llx found %p\n",
proc->pid, thread->pid, (u64)cookie,
death);
if (death == NULL) {
diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index b3acbc4174fb1..7556f470f921b 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -79,7 +79,7 @@ static void binder_insert_free_buffer(struct binder_alloc *alloc,
new_buffer_size = binder_alloc_buffer_size(alloc, new_buffer);

binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
- "%d: add free buffer, size %zd, at %pK\n",
+ "%d: add free buffer, size %zd, at %p\n",
alloc->pid, new_buffer_size, new_buffer);

while (*p) {
@@ -493,7 +493,7 @@ static struct binder_buffer *binder_alloc_new_buf_locked(
}

binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
- "%d: binder_alloc_buf size %zd got buffer %pK size %zd\n",
+ "%d: binder_alloc_buf size %zd got buffer %p size %zd\n",
alloc->pid, size, buffer, buffer_size);

/*
@@ -668,7 +668,7 @@ static void binder_free_buf_locked(struct binder_alloc *alloc,
ALIGN(buffer->extra_buffers_size, sizeof(void *));

binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC,
- "%d: binder_free_buf %pK size %zd buffer_size %zd\n",
+ "%d: binder_free_buf %p size %zd buffer_size %zd\n",
alloc->pid, buffer, size, buffer_size);

BUG_ON(buffer->free);
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 33454d01c2044..39dcefb1fdd54 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -2329,6 +2329,24 @@ static bool ata_dev_check_adapter(struct ata_device *dev,
return false;
}

+bool ata_adapter_is_online(struct ata_port *ap)
+{
+ struct device *dev;
+
+ if (!ap || !ap->host)
+ return false;
+
+ dev = ap->host->dev;
+ if (!dev)
+ return false;
+
+ if (dev_is_pci(dev) &&
+ pci_channel_offline(to_pci_dev(dev)))
+ return false;
+
+ return true;
+}
+
static int ata_dev_config_ncq(struct ata_device *dev,
char *desc, size_t desc_sz)
{
@@ -5023,6 +5041,12 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
qc->flags |= ATA_QCFLAG_ACTIVE;
ap->qc_active |= 1ULL << qc->tag;

+ /* Make sure the device is still accessible. */
+ if (!ata_adapter_is_online(ap)) {
+ qc->err_mask |= AC_ERR_HOST_BUS;
+ goto sys_err;
+ }
+
/*
* We guarantee to LLDs that they will have at least one
* non-zero sg if the command is a data command.
diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
index 16cd676eae1f9..205c62cf9e32d 100644
--- a/drivers/ata/libata-eh.c
+++ b/drivers/ata/libata-eh.c
@@ -738,7 +738,8 @@ void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap)
spin_unlock_irqrestore(ap->lock, flags);

/* invoke EH, skip if unloading or suspended */
- if (!(ap->pflags & (ATA_PFLAG_UNLOADING | ATA_PFLAG_SUSPENDED)))
+ if (!(ap->pflags & (ATA_PFLAG_UNLOADING | ATA_PFLAG_SUSPENDED)) &&
+ ata_adapter_is_online(ap))
ap->ops->error_handler(ap);
else {
/* if unloading, commence suicide */
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 74842750b2ed4..097080c8b82df 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -1702,6 +1702,42 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
ata_qc_done(qc);
}

+static int ata_scsi_qc_issue(struct ata_port *ap, struct ata_queued_cmd *qc)
+{
+ int ret;
+
+ if (!ap->ops->qc_defer)
+ goto issue;
+
+ /* Check if the command needs to be deferred. */
+ ret = ap->ops->qc_defer(qc);
+ switch (ret) {
+ case 0:
+ break;
+ case ATA_DEFER_LINK:
+ ret = SCSI_MLQUEUE_DEVICE_BUSY;
+ break;
+ case ATA_DEFER_PORT:
+ ret = SCSI_MLQUEUE_HOST_BUSY;
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ ret = SCSI_MLQUEUE_HOST_BUSY;
+ break;
+ }
+
+ if (ret) {
+ /* Force a requeue of the command to defer its execution. */
+ ata_qc_free(qc);
+ return ret;
+ }
+
+issue:
+ ata_qc_issue(qc);
+
+ return 0;
+}
+
/**
* ata_scsi_translate - Translate then issue SCSI command to ATA device
* @dev: ATA device to which the command is addressed
@@ -1725,66 +1761,49 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
* spin_lock_irqsave(host lock)
*
* RETURNS:
- * 0 on success, SCSI_ML_QUEUE_DEVICE_BUSY if the command
- * needs to be deferred.
+ * 0 on success, SCSI_MLQUEUE_DEVICE_BUSY or SCSI_MLQUEUE_HOST_BUSY if the
+ * command needs to be deferred.
*/
static int ata_scsi_translate(struct ata_device *dev, struct scsi_cmnd *cmd,
ata_xlat_func_t xlat_func)
{
struct ata_port *ap = dev->link->ap;
struct ata_queued_cmd *qc;
- int rc;

+ lockdep_assert_held(ap->lock);
+
+ /*
+ * ata_scsi_qc_new() calls scsi_done(cmd) in case of failure. So we
+ * have nothing further to do when allocating a qc fails.
+ */
qc = ata_scsi_qc_new(dev, cmd);
if (!qc)
- goto err_mem;
+ return 0;

/* data is present; dma-map it */
if (cmd->sc_data_direction == DMA_FROM_DEVICE ||
cmd->sc_data_direction == DMA_TO_DEVICE) {
if (unlikely(scsi_bufflen(cmd) < 1)) {
ata_dev_warn(dev, "WARNING: zero len r/w req\n");
- goto err_did;
+ cmd->result = (DID_ERROR << 16);
+ goto done;
}

ata_sg_init(qc, scsi_sglist(cmd), scsi_sg_count(cmd));
-
qc->dma_dir = cmd->sc_data_direction;
}

qc->complete_fn = ata_scsi_qc_complete;

if (xlat_func(qc))
- goto early_finish;
-
- if (ap->ops->qc_defer) {
- if ((rc = ap->ops->qc_defer(qc)))
- goto defer;
- }
+ goto done;

- /* select device, send command to hardware */
- ata_qc_issue(qc);
+ return ata_scsi_qc_issue(ap, qc);

- return 0;
-
-early_finish:
+done:
ata_qc_free(qc);
scsi_done(cmd);
return 0;
-
-err_did:
- ata_qc_free(qc);
- cmd->result = (DID_ERROR << 16);
- scsi_done(cmd);
-err_mem:
- return 0;
-
-defer:
- ata_qc_free(qc);
- if (rc == ATA_DEFER_LINK)
- return SCSI_MLQUEUE_DEVICE_BUSY;
- else
- return SCSI_MLQUEUE_HOST_BUSY;
}

struct ata_scsi_args {
@@ -2838,6 +2857,9 @@ ata_scsi_find_dev(struct ata_port *ap, const struct scsi_device *scsidev)
{
struct ata_device *dev = __ata_scsi_find_dev(ap, scsidev);

+ if (!ata_adapter_is_online(ap))
+ return NULL;
+
if (unlikely(!dev || !ata_dev_enabled(dev)))
return NULL;

diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
index 0337be4faec7c..d07693bd054eb 100644
--- a/drivers/ata/libata.h
+++ b/drivers/ata/libata.h
@@ -82,6 +82,7 @@ extern int atapi_check_dma(struct ata_queued_cmd *qc);
extern void swap_buf_le16(u16 *buf, unsigned int buf_words);
extern bool ata_phys_link_online(struct ata_link *link);
extern bool ata_phys_link_offline(struct ata_link *link);
+extern bool ata_adapter_is_online(struct ata_port *ap);
extern void ata_dev_init(struct ata_device *dev);
extern void ata_link_init(struct ata_port *ap, struct ata_link *link, int pmp);
extern int sata_link_init_spd(struct ata_link *link);
diff --git a/drivers/ata/pata_ftide010.c b/drivers/ata/pata_ftide010.c
index 73a9a5109238b..01f03031315aa 100644
--- a/drivers/ata/pata_ftide010.c
+++ b/drivers/ata/pata_ftide010.c
@@ -122,10 +122,10 @@ static const u8 mwdma_50_active_time[3] = {6, 2, 2};
static const u8 mwdma_50_recovery_time[3] = {6, 2, 1};
static const u8 mwdma_66_active_time[3] = {8, 3, 3};
static const u8 mwdma_66_recovery_time[3] = {8, 2, 1};
-static const u8 udma_50_setup_time[6] = {3, 3, 2, 2, 1, 1};
+static const u8 udma_50_setup_time[6] = {3, 3, 2, 2, 1, 9};
static const u8 udma_50_hold_time[6] = {3, 1, 1, 1, 1, 1};
-static const u8 udma_66_setup_time[7] = {4, 4, 3, 2, };
-static const u8 udma_66_hold_time[7] = {};
+static const u8 udma_66_setup_time[7] = {4, 4, 3, 2, 1, 9, 9};
+static const u8 udma_66_hold_time[7] = {4, 2, 1, 1, 1, 1, 1};

/*
* We set 66 MHz for all MWDMA modes
diff --git a/drivers/atm/fore200e.c b/drivers/atm/fore200e.c
index e815d58f7c955..c18a88ccd8cc3 100644
--- a/drivers/atm/fore200e.c
+++ b/drivers/atm/fore200e.c
@@ -373,6 +373,10 @@ fore200e_shutdown(struct fore200e* fore200e)
fallthrough;
case FORE200E_STATE_IRQ:
free_irq(fore200e->irq, fore200e->atm_dev);
+#ifdef FORE200E_USE_TASKLET
+ tasklet_kill(&fore200e->tx_tasklet);
+ tasklet_kill(&fore200e->rx_tasklet);
+#endif

fallthrough;
case FORE200E_STATE_ALLOC_BUF:
diff --git a/drivers/auxdisplay/arm-charlcd.c b/drivers/auxdisplay/arm-charlcd.c
index a7eae99a48f77..4e22882f57c9c 100644
--- a/drivers/auxdisplay/arm-charlcd.c
+++ b/drivers/auxdisplay/arm-charlcd.c
@@ -323,7 +323,7 @@ static int __init charlcd_probe(struct platform_device *pdev)
out_no_irq:
iounmap(lcd->virtbase);
out_no_memregion:
- release_mem_region(lcd->phybase, SZ_4K);
+ release_mem_region(lcd->phybase, lcd->physize);
out_no_resource:
kfree(lcd);
return ret;
diff --git a/drivers/base/power/wakeirq.c b/drivers/base/power/wakeirq.c
index 5a5a9e978e85f..ddbe9cc91d23d 100644
--- a/drivers/base/power/wakeirq.c
+++ b/drivers/base/power/wakeirq.c
@@ -83,13 +83,16 @@ EXPORT_SYMBOL_GPL(dev_pm_set_wake_irq);
*/
void dev_pm_clear_wake_irq(struct device *dev)
{
- struct wake_irq *wirq = dev->power.wakeirq;
+ struct wake_irq *wirq;
unsigned long flags;

- if (!wirq)
+ spin_lock_irqsave(&dev->power.lock, flags);
+ wirq = dev->power.wakeirq;
+ if (!wirq) {
+ spin_unlock_irqrestore(&dev->power.lock, flags);
return;
+ }

- spin_lock_irqsave(&dev->power.lock, flags);
device_wakeup_detach_irq(dev);
dev->power.wakeirq = NULL;
spin_unlock_irqrestore(&dev->power.lock, flags);
diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
index 752b417e81290..706cd556a0d69 100644
--- a/drivers/base/power/wakeup.c
+++ b/drivers/base/power/wakeup.c
@@ -280,9 +280,7 @@ EXPORT_SYMBOL_GPL(wakeup_sources_read_unlock);
*/
struct wakeup_source *wakeup_sources_walk_start(void)
{
- struct list_head *ws_head = &wakeup_sources;
-
- return list_entry_rcu(ws_head->next, struct wakeup_source, entry);
+ return list_first_or_null_rcu(&wakeup_sources, struct wakeup_source, entry);
}
EXPORT_SYMBOL_GPL(wakeup_sources_walk_start);

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 5bbd312c3e14d..8c5a7bcfa82b2 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2683,9 +2683,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
* connect.
*/
.max_hw_sectors = DRBD_MAX_BIO_SIZE_SAFE >> 8,
- .features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
- BLK_FEAT_ROTATIONAL |
- BLK_FEAT_STABLE_WRITES,
};

device = minor_to_device(minor);
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 720fc30e2ecc9..8c12bf1b2a0d2 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -1296,6 +1296,8 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
lim.max_segments = drbd_backing_dev_max_segments(device);
} else {
lim.max_segments = BLK_MAX_SEGMENTS;
+ lim.features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+ BLK_FEAT_ROTATIONAL | BLK_FEAT_STABLE_WRITES;
}

lim.max_hw_sectors = new >> SECTOR_SHIFT;
@@ -1318,8 +1320,24 @@ void drbd_reconsider_queue_parameters(struct drbd_device *device,
lim.max_hw_discard_sectors = 0;
}

- if (bdev)
+ if (bdev) {
blk_stack_limits(&lim, &b->limits, 0);
+ /*
+ * blk_set_stacking_limits() cleared the features, and
+ * blk_stack_limits() may or may not have inherited
+ * BLK_FEAT_STABLE_WRITES from the backing device.
+ *
+ * DRBD always requires stable writes because:
+ * 1. The same bio data is read for both local disk I/O and
+ * network transmission. If the page changes mid-flight,
+ * the local and remote copies could diverge.
+ * 2. When data integrity is enabled, DRBD calculates a
+ * checksum before sending the data. If the page changes
+ * between checksum calculation and transmission, the
+ * receiver will detect a checksum mismatch.
+ */
+ lim.features |= BLK_FEAT_STABLE_WRITES;
+ }

/*
* If we can handle "zeroes" efficiently on the protocol, we want to do
diff --git a/drivers/block/rnbd/rnbd-srv.c b/drivers/block/rnbd/rnbd-srv.c
index 08ce6d96d04cf..0c7d40ed8d199 100644
--- a/drivers/block/rnbd/rnbd-srv.c
+++ b/drivers/block/rnbd/rnbd-srv.c
@@ -145,23 +145,30 @@ static int process_rdma(struct rnbd_srv_session *srv_sess,
priv->sess_dev = sess_dev;
priv->id = id;

- bio = bio_alloc(file_bdev(sess_dev->bdev_file), 1,
+ bio = bio_alloc(file_bdev(sess_dev->bdev_file), !!datalen,
rnbd_to_bio_flags(le32_to_cpu(msg->rw)), GFP_KERNEL);
- if (bio_add_page(bio, virt_to_page(data), datalen,
- offset_in_page(data)) != datalen) {
- rnbd_srv_err_rl(sess_dev, "Failed to map data to bio\n");
- err = -EINVAL;
- goto bio_put;
+ if (unlikely(!bio)) {
+ err = -ENOMEM;
+ goto put_sess_dev;
}

- bio->bi_opf = rnbd_to_bio_flags(le32_to_cpu(msg->rw));
- if (bio_has_data(bio) &&
- bio->bi_iter.bi_size != le32_to_cpu(msg->bi_size)) {
- rnbd_srv_err_rl(sess_dev, "Datalen mismatch: bio bi_size (%u), bi_size (%u)\n",
- bio->bi_iter.bi_size, msg->bi_size);
- err = -EINVAL;
- goto bio_put;
+ if (!datalen) {
+ /*
+ * For special requests like DISCARD and WRITE_ZEROES, the datalen is zero.
+ */
+ bio->bi_iter.bi_size = le32_to_cpu(msg->bi_size);
+ } else {
+ bio_add_virt_nofail(bio, data, datalen);
+ bio->bi_opf = rnbd_to_bio_flags(le32_to_cpu(msg->rw));
+ if (bio->bi_iter.bi_size != le32_to_cpu(msg->bi_size)) {
+ rnbd_srv_err_rl(sess_dev,
+ "Datalen mismatch: bio bi_size (%u), bi_size (%u)\n",
+ bio->bi_iter.bi_size, msg->bi_size);
+ err = -EINVAL;
+ goto bio_put;
+ }
}
+
bio->bi_end_io = rnbd_dev_bi_end_io;
bio->bi_private = priv;
bio->bi_iter.bi_sector = le64_to_cpu(msg->sector);
@@ -175,6 +182,7 @@ static int process_rdma(struct rnbd_srv_session *srv_sess,

bio_put:
bio_put(bio);
+put_sess_dev:
rnbd_put_sess_dev(sess_dev);
err:
kfree(priv);
@@ -543,6 +551,8 @@ static void rnbd_srv_fill_msg_open_rsp(struct rnbd_msg_open_rsp *rsp,
{
struct block_device *bdev = file_bdev(sess_dev->bdev_file);

+ memset(rsp, 0, sizeof(*rsp));
+
rsp->hdr.type = cpu_to_le16(RNBD_MSG_OPEN_RSP);
rsp->device_id = cpu_to_le32(sess_dev->device_id);
rsp->nsectors = cpu_to_le64(bdev_nr_sectors(bdev));
@@ -649,6 +659,7 @@ static void process_msg_sess_info(struct rnbd_srv_session *srv_sess,

trace_process_msg_sess_info(srv_sess, sess_info_msg);

+ memset(rsp, 0, sizeof(*rsp));
rsp->hdr.type = cpu_to_le16(RNBD_MSG_SESS_INFO_RSP);
rsp->ver = srv_sess->ver;
}
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 2d46383e8d26b..c6a59f02944fc 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -3026,10 +3026,10 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
if (issue_flags & IO_URING_F_NONBLOCK)
return -EAGAIN;

- ublk_ctrl_cmd_dump(cmd);
-
if (!(issue_flags & IO_URING_F_SQE128))
- goto out;
+ return -EINVAL;
+
+ ublk_ctrl_cmd_dump(cmd);

ret = ublk_check_cmd_op(cmd_op);
if (ret)
diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
index 34812bf7587d6..d430645657e3d 100644
--- a/drivers/bluetooth/btintel_pcie.c
+++ b/drivers/bluetooth/btintel_pcie.c
@@ -798,11 +798,6 @@ static void btintel_pcie_msix_rx_handle(struct btintel_pcie_data *data)
}
}

-static irqreturn_t btintel_pcie_msix_isr(int irq, void *data)
-{
- return IRQ_WAKE_THREAD;
-}
-
static inline bool btintel_pcie_is_rxq_empty(struct btintel_pcie_data *data)
{
return data->ia.cr_hia[BTINTEL_PCIE_RXQ_NUM] == data->ia.cr_tia[BTINTEL_PCIE_RXQ_NUM];
@@ -896,9 +891,9 @@ static int btintel_pcie_setup_irq(struct btintel_pcie_data *data)

err = devm_request_threaded_irq(&data->pdev->dev,
msix_entry->vector,
- btintel_pcie_msix_isr,
+ NULL,
btintel_pcie_irq_msix_handler,
- IRQF_SHARED,
+ IRQF_ONESHOT | IRQF_SHARED,
KBUILD_MODNAME,
msix_entry);
if (err) {
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 490efe4d67a97..9d3236321056b 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -551,6 +551,8 @@ static const struct usb_device_id quirks_table[] = {
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x13d3, 0x3592), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3612), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x0489, 0xe122), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },

@@ -625,6 +627,8 @@ static const struct usb_device_id quirks_table[] = {
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x13d3, 0x3622), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe158), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },

/* Additional MediaTek MT7921 Bluetooth devices */
{ USB_DEVICE(0x0489, 0xe0c8), .driver_info = BTUSB_MEDIATEK |
@@ -753,6 +757,7 @@ static const struct usb_device_id quirks_table[] = {

/* Additional Realtek 8723BU Bluetooth devices */
{ USB_DEVICE(0x7392, 0xa611), .driver_info = BTUSB_REALTEK },
+ { USB_DEVICE(0x2c0a, 0x8761), .driver_info = BTUSB_REALTEK },

/* Additional Realtek 8723DE Bluetooth devices */
{ USB_DEVICE(0x0bda, 0xb009), .driver_info = BTUSB_REALTEK },
diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
index e6ad01d5e1d5d..586a2ff72a2ae 100644
--- a/drivers/bluetooth/hci_qca.c
+++ b/drivers/bluetooth/hci_qca.c
@@ -1987,19 +1987,23 @@ static int qca_setup(struct hci_uart *hu)
}

out:
- if (ret && retries < MAX_INIT_RETRIES) {
- bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+ if (ret) {
qca_power_shutdown(hu);
- if (hu->serdev) {
- serdev_device_close(hu->serdev);
- ret = serdev_device_open(hu->serdev);
- if (ret) {
- bt_dev_err(hdev, "failed to open port");
- return ret;
+
+ if (retries < MAX_INIT_RETRIES) {
+ bt_dev_warn(hdev, "Retry BT power ON:%d", retries);
+ if (hu->serdev) {
+ serdev_device_close(hu->serdev);
+ ret = serdev_device_open(hu->serdev);
+ if (ret) {
+ bt_dev_err(hdev, "failed to open port");
+ return ret;
+ }
}
+ retries++;
+ goto retry;
}
- retries++;
- goto retry;
+ return ret;
}

/* Setup bdaddr */
diff --git a/drivers/bus/fsl-mc/fsl-mc-bus.c b/drivers/bus/fsl-mc/fsl-mc-bus.c
index bdf78260929d9..5543ba93e5017 100644
--- a/drivers/bus/fsl-mc/fsl-mc-bus.c
+++ b/drivers/bus/fsl-mc/fsl-mc-bus.c
@@ -908,11 +908,7 @@ int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc,
return 0;

error_cleanup_dev:
- kfree(mc_dev->regions);
- if (mc_bus)
- kfree(mc_bus);
- else
- kfree(mc_dev);
+ put_device(&mc_dev->dev);

return error;
}
diff --git a/drivers/bus/omap-ocp2scp.c b/drivers/bus/omap-ocp2scp.c
index 7d7479ba0a759..87e290a3dc817 100644
--- a/drivers/bus/omap-ocp2scp.c
+++ b/drivers/bus/omap-ocp2scp.c
@@ -17,15 +17,6 @@
#define OCP2SCP_TIMING 0x18
#define SYNC2_MASK 0xf

-static int ocp2scp_remove_devices(struct device *dev, void *c)
-{
- struct platform_device *pdev = to_platform_device(dev);
-
- platform_device_unregister(pdev);
-
- return 0;
-}
-
static int omap_ocp2scp_probe(struct platform_device *pdev)
{
int ret;
@@ -79,7 +70,7 @@ static int omap_ocp2scp_probe(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);

err0:
- device_for_each_child(&pdev->dev, NULL, ocp2scp_remove_devices);
+ of_platform_depopulate(&pdev->dev);

return ret;
}
@@ -87,7 +78,7 @@ static int omap_ocp2scp_probe(struct platform_device *pdev)
static void omap_ocp2scp_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);
- device_for_each_child(&pdev->dev, NULL, ocp2scp_remove_devices);
+ of_platform_depopulate(&pdev->dev);
}

#ifdef CONFIG_OF
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index 57c51efa56131..c6e913497e1df 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -20,23 +20,25 @@
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/random.h>
+#include <linux/rcupdate.h>
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>
+#include <linux/workqueue.h>

#define RNG_MODULE_NAME "hw_random"

#define RNG_BUFFER_SIZE (SMP_CACHE_BYTES < 32 ? 32 : SMP_CACHE_BYTES)

-static struct hwrng *current_rng;
+static struct hwrng __rcu *current_rng;
/* the current rng has been explicitly chosen by user via sysfs */
static int cur_rng_set_by_user;
static struct task_struct *hwrng_fill;
/* list of registered rngs */
static LIST_HEAD(rng_list);
-/* Protects rng_list and current_rng */
+/* Protects rng_list, hwrng_fill and updating on current_rng */
static DEFINE_MUTEX(rng_mutex);
/* Protects rng read functions, data_avail, rng_buffer and rng_fillbuf */
static DEFINE_MUTEX(reading_mutex);
@@ -64,18 +66,39 @@ static size_t rng_buffer_size(void)
return RNG_BUFFER_SIZE;
}

-static inline void cleanup_rng(struct kref *kref)
+static void cleanup_rng_work(struct work_struct *work)
{
- struct hwrng *rng = container_of(kref, struct hwrng, ref);
+ struct hwrng *rng = container_of(work, struct hwrng, cleanup_work);
+
+ /*
+ * Hold rng_mutex here so we serialize in case they set_current_rng
+ * on rng again immediately.
+ */
+ mutex_lock(&rng_mutex);
+
+ /* Skip if rng has been reinitialized. */
+ if (kref_read(&rng->ref)) {
+ mutex_unlock(&rng_mutex);
+ return;
+ }

if (rng->cleanup)
rng->cleanup(rng);

complete(&rng->cleanup_done);
+ mutex_unlock(&rng_mutex);
+}
+
+static inline void cleanup_rng(struct kref *kref)
+{
+ struct hwrng *rng = container_of(kref, struct hwrng, ref);
+
+ schedule_work(&rng->cleanup_work);
}

static int set_current_rng(struct hwrng *rng)
{
+ struct hwrng *old_rng;
int err;

BUG_ON(!mutex_is_locked(&rng_mutex));
@@ -84,8 +107,14 @@ static int set_current_rng(struct hwrng *rng)
if (err)
return err;

- drop_current_rng();
- current_rng = rng;
+ old_rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ rcu_assign_pointer(current_rng, rng);
+
+ if (old_rng) {
+ synchronize_rcu();
+ kref_put(&old_rng->ref, cleanup_rng);
+ }

/* if necessary, start hwrng thread */
if (!hwrng_fill) {
@@ -101,47 +130,56 @@ static int set_current_rng(struct hwrng *rng)

static void drop_current_rng(void)
{
- BUG_ON(!mutex_is_locked(&rng_mutex));
- if (!current_rng)
+ struct hwrng *rng;
+
+ rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ if (!rng)
return;

+ RCU_INIT_POINTER(current_rng, NULL);
+ synchronize_rcu();
+
+ if (hwrng_fill) {
+ kthread_stop(hwrng_fill);
+ hwrng_fill = NULL;
+ }
+
/* decrease last reference for triggering the cleanup */
- kref_put(&current_rng->ref, cleanup_rng);
- current_rng = NULL;
+ kref_put(&rng->ref, cleanup_rng);
}

-/* Returns ERR_PTR(), NULL or refcounted hwrng */
+/* Returns NULL or refcounted hwrng */
static struct hwrng *get_current_rng_nolock(void)
{
- if (current_rng)
- kref_get(&current_rng->ref);
+ struct hwrng *rng;
+
+ rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ if (rng)
+ kref_get(&rng->ref);

- return current_rng;
+ return rng;
}

static struct hwrng *get_current_rng(void)
{
struct hwrng *rng;

- if (mutex_lock_interruptible(&rng_mutex))
- return ERR_PTR(-ERESTARTSYS);
+ rcu_read_lock();
+ rng = rcu_dereference(current_rng);
+ if (rng)
+ kref_get(&rng->ref);

- rng = get_current_rng_nolock();
+ rcu_read_unlock();

- mutex_unlock(&rng_mutex);
return rng;
}

static void put_rng(struct hwrng *rng)
{
- /*
- * Hold rng_mutex here so we serialize in case they set_current_rng
- * on rng again immediately.
- */
- mutex_lock(&rng_mutex);
if (rng)
kref_put(&rng->ref, cleanup_rng);
- mutex_unlock(&rng_mutex);
}

static int hwrng_init(struct hwrng *rng)
@@ -206,10 +244,6 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,

while (size) {
rng = get_current_rng();
- if (IS_ERR(rng)) {
- err = PTR_ERR(rng);
- goto out;
- }
if (!rng) {
err = -ENODEV;
goto out;
@@ -296,7 +330,7 @@ static struct miscdevice rng_miscdev = {

static int enable_best_rng(void)
{
- struct hwrng *rng, *new_rng = NULL;
+ struct hwrng *rng, *cur_rng, *new_rng = NULL;
int ret = -ENODEV;

BUG_ON(!mutex_is_locked(&rng_mutex));
@@ -314,7 +348,9 @@ static int enable_best_rng(void)
new_rng = rng;
}

- ret = ((new_rng == current_rng) ? 0 : set_current_rng(new_rng));
+ cur_rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ ret = ((new_rng == cur_rng) ? 0 : set_current_rng(new_rng));
if (!ret)
cur_rng_set_by_user = 0;

@@ -334,6 +370,9 @@ static ssize_t rng_current_store(struct device *dev,

if (sysfs_streq(buf, "")) {
err = enable_best_rng();
+ } else if (sysfs_streq(buf, "none")) {
+ cur_rng_set_by_user = 1;
+ drop_current_rng();
} else {
list_for_each_entry(rng, &rng_list, list) {
if (sysfs_streq(rng->name, buf)) {
@@ -361,8 +400,6 @@ static ssize_t rng_current_show(struct device *dev,
struct hwrng *rng;

rng = get_current_rng();
- if (IS_ERR(rng))
- return PTR_ERR(rng);

ret = sysfs_emit(buf, "%s\n", rng ? rng->name : "none");
put_rng(rng);
@@ -385,7 +422,7 @@ static ssize_t rng_available_show(struct device *dev,
strlcat(buf, rng->name, PAGE_SIZE);
strlcat(buf, " ", PAGE_SIZE);
}
- strlcat(buf, "\n", PAGE_SIZE);
+ strlcat(buf, "none\n", PAGE_SIZE);
mutex_unlock(&rng_mutex);

return strlen(buf);
@@ -406,8 +443,6 @@ static ssize_t rng_quality_show(struct device *dev,
struct hwrng *rng;

rng = get_current_rng();
- if (IS_ERR(rng))
- return PTR_ERR(rng);

if (!rng) /* no need to put_rng */
return -ENODEV;
@@ -422,6 +457,7 @@ static ssize_t rng_quality_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t len)
{
+ struct hwrng *rng;
u16 quality;
int ret = -EINVAL;

@@ -438,12 +474,13 @@ static ssize_t rng_quality_store(struct device *dev,
goto out;
}

- if (!current_rng) {
+ rng = rcu_dereference_protected(current_rng, lockdep_is_held(&rng_mutex));
+ if (!rng) {
ret = -ENODEV;
goto out;
}

- current_rng->quality = quality;
+ rng->quality = quality;
current_quality = quality; /* obsolete */

/* the best available RNG may have changed */
@@ -479,8 +516,20 @@ static int hwrng_fillfn(void *unused)
struct hwrng *rng;

rng = get_current_rng();
- if (IS_ERR(rng) || !rng)
+ if (!rng) {
+ /*
+ * Keep the task_struct alive until kthread_stop()
+ * is called to avoid UAF in drop_current_rng().
+ */
+ while (!kthread_should_stop()) {
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (!kthread_should_stop())
+ schedule();
+ }
+ set_current_state(TASK_RUNNING);
break;
+ }
+
mutex_lock(&reading_mutex);
rc = rng_get_data(rng, rng_fillbuf,
rng_buffer_size(), 1);
@@ -508,14 +557,13 @@ static int hwrng_fillfn(void *unused)
add_hwgenerator_randomness((void *)rng_fillbuf, rc,
entropy >> 10, true);
}
- hwrng_fill = NULL;
return 0;
}

int hwrng_register(struct hwrng *rng)
{
int err = -EINVAL;
- struct hwrng *tmp;
+ struct hwrng *cur_rng, *tmp;

if (!rng->name || (!rng->data_read && !rng->read))
goto out;
@@ -530,6 +578,7 @@ int hwrng_register(struct hwrng *rng)
}
list_add_tail(&rng->list, &rng_list);

+ INIT_WORK(&rng->cleanup_work, cleanup_rng_work);
init_completion(&rng->cleanup_done);
complete(&rng->cleanup_done);
init_completion(&rng->dying);
@@ -537,16 +586,19 @@ int hwrng_register(struct hwrng *rng)
/* Adjust quality field to always have a proper value */
rng->quality = min_t(u16, min_t(u16, default_quality, 1024), rng->quality ?: 1024);

- if (!current_rng ||
- (!cur_rng_set_by_user && rng->quality > current_rng->quality)) {
- /*
- * Set new rng as current as the new rng source
- * provides better entropy quality and was not
- * chosen by userspace.
- */
- err = set_current_rng(rng);
- if (err)
- goto out_unlock;
+ if (!cur_rng_set_by_user) {
+ cur_rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ if (!cur_rng || rng->quality > cur_rng->quality) {
+ /*
+ * Set new rng as current as the new rng source
+ * provides better entropy quality and was not
+ * chosen by userspace.
+ */
+ err = set_current_rng(rng);
+ if (err)
+ goto out_unlock;
+ }
}
mutex_unlock(&rng_mutex);
return 0;
@@ -559,14 +611,17 @@ EXPORT_SYMBOL_GPL(hwrng_register);

void hwrng_unregister(struct hwrng *rng)
{
- struct hwrng *new_rng;
+ struct hwrng *cur_rng;
int err;

mutex_lock(&rng_mutex);

list_del(&rng->list);
complete_all(&rng->dying);
- if (current_rng == rng) {
+
+ cur_rng = rcu_dereference_protected(current_rng,
+ lockdep_is_held(&rng_mutex));
+ if (cur_rng == rng) {
err = enable_best_rng();
if (err) {
drop_current_rng();
@@ -574,17 +629,7 @@ void hwrng_unregister(struct hwrng *rng)
}
}

- new_rng = get_current_rng_nolock();
- if (list_empty(&rng_list)) {
- mutex_unlock(&rng_mutex);
- if (hwrng_fill)
- kthread_stop(hwrng_fill);
- } else
- mutex_unlock(&rng_mutex);
-
- if (new_rng)
- put_rng(new_rng);
-
+ mutex_unlock(&rng_mutex);
wait_for_completion(&rng->cleanup_done);
}
EXPORT_SYMBOL_GPL(hwrng_unregister);
@@ -672,7 +717,7 @@ static int __init hwrng_modinit(void)
static void __exit hwrng_modexit(void)
{
mutex_lock(&rng_mutex);
- BUG_ON(current_rng);
+ WARN_ON(rcu_access_pointer(current_rng));
kfree(rng_buffer);
kfree(rng_fillbuf);
mutex_unlock(&rng_mutex);
diff --git a/drivers/char/ipmi/ipmi_ipmb.c b/drivers/char/ipmi/ipmi_ipmb.c
index 6a4f279c7c1f5..163ec867d68af 100644
--- a/drivers/char/ipmi/ipmi_ipmb.c
+++ b/drivers/char/ipmi/ipmi_ipmb.c
@@ -202,11 +202,16 @@ static int ipmi_ipmb_slave_cb(struct i2c_client *client,
break;

case I2C_SLAVE_READ_REQUESTED:
+ *val = 0xff;
+ ipmi_ipmb_check_msg_done(iidev);
+ break;
+
case I2C_SLAVE_STOP:
ipmi_ipmb_check_msg_done(iidev);
break;

case I2C_SLAVE_READ_PROCESSED:
+ *val = 0xff;
break;
}
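The IPMB hunk above splits `I2C_SLAVE_READ_REQUESTED` out of the fallthrough and initializes `*val` to 0xff on both read events, so a master reading past the end of a queued message sees the bus idle value instead of stale buffer contents. A userspace sketch of that pattern (hypothetical names, not the driver's real callback):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified userspace mirror (hypothetical) of the i2c slave read events:
 * the byte handed back to the master is initialized to 0xff on every
 * read-type event, so a read past the end of a queued message returns the
 * bus idle value rather than stale buffer contents. */
enum slave_event {
	SLAVE_READ_REQUESTED,	/* master starts a read */
	SLAVE_READ_PROCESSED,	/* master wants the next byte */
	SLAVE_STOP,		/* transaction finished */
};

static void slave_cb(enum slave_event ev, uint8_t *val)
{
	switch (ev) {
	case SLAVE_READ_REQUESTED:
		*val = 0xff;	/* default byte; the real driver also
				 * checks for a completed message here */
		break;
	case SLAVE_READ_PROCESSED:
		*val = 0xff;	/* nothing queued: bus idles high */
		break;
	case SLAVE_STOP:
		break;		/* finalize; nothing to hand back */
	}
}
```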

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 23ee76bbb4aa7..a23c8b2feab6c 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -96,8 +96,7 @@ static ATOMIC_NOTIFIER_HEAD(random_ready_notifier);
/* Control how we warn userspace. */
static struct ratelimit_state urandom_warning =
RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);
-static int ratelimit_disable __read_mostly =
- IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);
+static int ratelimit_disable __read_mostly = 0;
module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);
MODULE_PARM_DESC(ratelimit_disable, "Disable random ratelimit suppression");

@@ -168,12 +167,6 @@ int __cold execute_with_initialized_rng(struct notifier_block *nb)
return ret;
}

-#define warn_unseeded_randomness() \
- if (IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM) && !crng_ready()) \
- printk_deferred(KERN_NOTICE "random: %s called from %pS with crng_init=%d\n", \
- __func__, (void *)_RET_IP_, crng_init)
-
-
/*********************************************************************
*
* Fast key erasure RNG, the "crng".
@@ -434,7 +427,6 @@ static void _get_random_bytes(void *buf, size_t len)
*/
void get_random_bytes(void *buf, size_t len)
{
- warn_unseeded_randomness();
_get_random_bytes(buf, len);
}
EXPORT_SYMBOL(get_random_bytes);
@@ -522,8 +514,6 @@ type get_random_ ##type(void) \
struct batch_ ##type *batch; \
unsigned long next_gen; \
\
- warn_unseeded_randomness(); \
- \
if (!crng_ready()) { \
_get_random_bytes(&ret, sizeof(ret)); \
return ret; \
diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
index c0771980bc2ff..06caf53a42ee5 100644
--- a/drivers/char/tpm/st33zp24/st33zp24.c
+++ b/drivers/char/tpm/st33zp24/st33zp24.c
@@ -328,8 +328,10 @@ static int st33zp24_send(struct tpm_chip *chip, unsigned char *buf,

for (i = 0; i < len - 1;) {
burstcnt = get_burstcount(chip);
- if (burstcnt < 0)
- return burstcnt;
+ if (burstcnt < 0) {
+ ret = burstcnt;
+ goto out_err;
+ }
size = min_t(int, len - i - 1, burstcnt);
ret = tpm_dev->ops->send(tpm_dev->phy_id, TPM_DATA_FIFO,
buf + i, size);
diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
index 81d8a78dc6552..3675faa4a00c7 100644
--- a/drivers/char/tpm/tpm_i2c_infineon.c
+++ b/drivers/char/tpm/tpm_i2c_infineon.c
@@ -543,8 +543,10 @@ static int tpm_tis_i2c_send(struct tpm_chip *chip, u8 *buf, size_t len)
burstcnt = get_burstcount(chip);

/* burstcnt < 0 = TPM is busy */
- if (burstcnt < 0)
- return burstcnt;
+ if (burstcnt < 0) {
+ rc = burstcnt;
+ goto out_err;
+ }

if (burstcnt > (len - 1 - count))
burstcnt = len - 1 - count;
diff --git a/drivers/char/tpm/tpm_tis_i2c_cr50.c b/drivers/char/tpm/tpm_tis_i2c_cr50.c
index adf22992138e5..d51ef9c880832 100644
--- a/drivers/char/tpm/tpm_tis_i2c_cr50.c
+++ b/drivers/char/tpm/tpm_tis_i2c_cr50.c
@@ -712,8 +712,7 @@ static int tpm_cr50_i2c_probe(struct i2c_client *client)

if (client->irq > 0) {
rc = devm_request_irq(dev, client->irq, tpm_cr50_i2c_int_handler,
- IRQF_TRIGGER_FALLING | IRQF_ONESHOT |
- IRQF_NO_AUTOEN,
+ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
dev->driver->name, chip);
if (rc < 0) {
dev_err(dev, "Failed to probe IRQ %d\n", client->irq);
diff --git a/drivers/char/tpm/tpm_tis_spi_cr50.c b/drivers/char/tpm/tpm_tis_spi_cr50.c
index f4937280e9406..32920b4cecfb4 100644
--- a/drivers/char/tpm/tpm_tis_spi_cr50.c
+++ b/drivers/char/tpm/tpm_tis_spi_cr50.c
@@ -287,7 +287,7 @@ int cr50_spi_probe(struct spi_device *spi)
if (spi->irq > 0) {
ret = devm_request_irq(&spi->dev, spi->irq,
cr50_spi_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ IRQF_TRIGGER_RISING,
"cr50_spi", cr50_phy);
if (ret < 0) {
if (ret == -EPROBE_DEFER)
diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c
index 457a48d489412..c205b7f1dadeb 100644
--- a/drivers/clk/clk-apple-nco.c
+++ b/drivers/clk/clk-apple-nco.c
@@ -318,6 +318,7 @@ static int applnco_probe(struct platform_device *pdev)
}

static const struct of_device_id applnco_ids[] = {
+ { .compatible = "apple,t8103-nco" },
{ .compatible = "apple,nco" },
{ }
};
diff --git a/drivers/clk/clk-renesas-pcie.c b/drivers/clk/clk-renesas-pcie.c
index 4c3a5e4eb77ac..f94a9c4d0b670 100644
--- a/drivers/clk/clk-renesas-pcie.c
+++ b/drivers/clk/clk-renesas-pcie.c
@@ -64,7 +64,7 @@ struct rs9_driver_data {
struct i2c_client *client;
struct regmap *regmap;
const struct rs9_chip_info *chip_info;
- struct clk_hw *clk_dif[4];
+ struct clk_hw *clk_dif[8];
u8 pll_amplitude;
u8 pll_ssc;
u8 clk_dif_sr;
diff --git a/drivers/clk/mediatek/clk-mtk.c b/drivers/clk/mediatek/clk-mtk.c
index ba1d1c495bc2b..644e5a854f2b6 100644
--- a/drivers/clk/mediatek/clk-mtk.c
+++ b/drivers/clk/mediatek/clk-mtk.c
@@ -497,14 +497,16 @@ static int __mtk_clk_simple_probe(struct platform_device *pdev,


if (mcd->need_runtime_pm) {
- devm_pm_runtime_enable(&pdev->dev);
+ r = devm_pm_runtime_enable(&pdev->dev);
+ if (r)
+ goto unmap_io;
/*
* Do a pm_runtime_resume_and_get() to workaround a possible
* deadlock between clk_register() and the genpd framework.
*/
r = pm_runtime_resume_and_get(&pdev->dev);
if (r)
- return r;
+ goto unmap_io;
}

/* Calculate how many clk_hw_onecell_data entries to allocate */
@@ -618,11 +620,11 @@ static int __mtk_clk_simple_probe(struct platform_device *pdev,
free_data:
mtk_free_clk_data(clk_data);
free_base:
- if (mcd->shared_io && base)
- iounmap(base);
-
if (mcd->need_runtime_pm)
pm_runtime_put(&pdev->dev);
+unmap_io:
+ if (mcd->shared_io && base)
+ iounmap(base);
return r;
}
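The relabeled error path above restores the usual goto-unwind convention: a failure jumps to the label that undoes only the steps already completed, in reverse order of setup, so a `devm_pm_runtime_enable()` failure no longer skips the shared iounmap. A minimal userspace sketch of that convention (hypothetical names, not the MediaTek driver itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace sketch (hypothetical names) of goto-based error unwinding:
 * each setup step gets a matching label, and a failure jumps to the label
 * that undoes only the steps already completed, in reverse order. */
static bool io_mapped;
static bool rpm_enabled;

static int fake_probe(bool fail_rpm)
{
	int r = 0;

	io_mapped = true;		/* step 1: map registers */

	if (fail_rpm) {			/* step 2: enable runtime PM */
		r = -1;
		goto unmap_io;		/* step 2 failed: undo step 1 only */
	}
	rpm_enabled = true;

	return 0;

unmap_io:
	io_mapped = false;
	return r;
}
```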

diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
index d9529de200ae4..2bfb0ab9c93cc 100644
--- a/drivers/clk/meson/gxbb.c
+++ b/drivers/clk/meson/gxbb.c
@@ -318,12 +318,23 @@ static struct clk_regmap gxbb_hdmi_pll = {
},
};

+/*
+ * GXL hdmi OD dividers are POWER_OF_TWO dividers but limited to /4.
+ * A divider value of 3 should map to /8 but instead maps to /4, so ignore it.
+ */
+static const struct clk_div_table gxl_hdmi_pll_od_div_table[] = {
+ { .val = 0, .div = 1 },
+ { .val = 1, .div = 2 },
+ { .val = 2, .div = 4 },
+ { /* sentinel */ }
+};
+
static struct clk_regmap gxl_hdmi_pll_od = {
.data = &(struct clk_regmap_div_data){
.offset = HHI_HDMI_PLL_CNTL + 8,
.shift = 21,
.width = 2,
- .flags = CLK_DIVIDER_POWER_OF_TWO,
+ .table = gxl_hdmi_pll_od_div_table,
},
.hw.init = &(struct clk_init_data){
.name = "hdmi_pll_od",
@@ -341,7 +352,7 @@ static struct clk_regmap gxl_hdmi_pll_od2 = {
.offset = HHI_HDMI_PLL_CNTL + 8,
.shift = 23,
.width = 2,
- .flags = CLK_DIVIDER_POWER_OF_TWO,
+ .table = gxl_hdmi_pll_od_div_table,
},
.hw.init = &(struct clk_init_data){
.name = "hdmi_pll_od2",
@@ -359,7 +370,7 @@ static struct clk_regmap gxl_hdmi_pll = {
.offset = HHI_HDMI_PLL_CNTL + 8,
.shift = 19,
.width = 2,
- .flags = CLK_DIVIDER_POWER_OF_TWO,
+ .table = gxl_hdmi_pll_od_div_table,
},
.hw.init = &(struct clk_init_data){
.name = "hdmi_pll",
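The `clk_div_table` introduced above replaces `CLK_DIVIDER_POWER_OF_TWO` because the GXL OD dividers stop at /4: register value 3 is simply absent from the table, so the clock framework can never select it. A sketch of how such a sentinel-terminated table lookup behaves (simplified, not the common clock framework implementation):

```c
#include <assert.h>

/* A divider table shaped like gxl_hdmi_pll_od_div_table above: only the
 * register values the hardware honors are listed, so value 3 (which the
 * silicon clamps to /4 instead of the power-of-two /8) has no entry and
 * can never be selected. */
struct div_entry {
	unsigned int val;	/* register field value */
	unsigned int div;	/* resulting divisor, 0 = sentinel */
};

static const struct div_entry od_table[] = {
	{ .val = 0, .div = 1 },
	{ .val = 1, .div = 2 },
	{ .val = 2, .div = 4 },
	{ /* sentinel */ }
};

/* Return the divisor for a register value, or 0 if it is not usable. */
static unsigned int table_div(const struct div_entry *t, unsigned int val)
{
	for (; t->div; t++)
		if (t->val == val)
			return t->div;
	return 0;
}
```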
diff --git a/drivers/clk/microchip/clk-core.c b/drivers/clk/microchip/clk-core.c
index 1b4f023cdc8be..71fbaf8318f22 100644
--- a/drivers/clk/microchip/clk-core.c
+++ b/drivers/clk/microchip/clk-core.c
@@ -281,14 +281,13 @@ static u8 roclk_get_parent(struct clk_hw *hw)

v = (readl(refo->ctrl_reg) >> REFO_SEL_SHIFT) & REFO_SEL_MASK;

- if (!refo->parent_map)
- return v;
-
- for (i = 0; i < clk_hw_get_num_parents(hw); i++)
- if (refo->parent_map[i] == v)
- return i;
+ if (refo->parent_map) {
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++)
+ if (refo->parent_map[i] == v)
+ return i;
+ }

- return -EINVAL;
+ return v;
}

static unsigned long roclk_calc_rate(unsigned long parent_rate,
@@ -823,13 +822,13 @@ static u8 sclk_get_parent(struct clk_hw *hw)

v = (readl(sclk->mux_reg) >> OSC_CUR_SHIFT) & OSC_CUR_MASK;

- if (!sclk->parent_map)
- return v;
+ if (sclk->parent_map) {
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++)
+ if (sclk->parent_map[i] == v)
+ return i;
+ }

- for (i = 0; i < clk_hw_get_num_parents(hw); i++)
- if (sclk->parent_map[i] == v)
- return i;
- return -EINVAL;
+ return v;
}

static int sclk_set_parent(struct clk_hw *hw, u8 index)
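Returning the raw mux field instead of `-EINVAL` matters here because both `get_parent` callbacks return a `u8`: the old error value was silently truncated to 234 and then treated as a parent index. A userspace sketch of the fixed lookup (simplified, hypothetical names):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch (hypothetical names) of the fixed lookup above: with a
 * parent_map, translate the raw mux field into a parent index; without
 * one, or when the value is unmapped, fall back to the raw field rather
 * than returning a negative errno that a u8 cannot carry. */
static unsigned char get_parent(const unsigned char *map, size_t n,
				unsigned char raw)
{
	size_t i;

	if (map) {
		for (i = 0; i < n; i++)
			if (map[i] == raw)
				return (unsigned char)i;
	}
	return raw;
}
```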
diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
index bf6406f5279a4..6aa9dcdabffde 100644
--- a/drivers/clk/qcom/clk-rcg2.c
+++ b/drivers/clk/qcom/clk-rcg2.c
@@ -587,7 +587,7 @@ static int clk_rcg2_get_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
static int clk_rcg2_set_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)
{
struct clk_rcg2 *rcg = to_clk_rcg2(hw);
- u32 notn_m, n, m, d, not2d, mask, duty_per, cfg;
+ u32 notn_m, n, m, d, not2d, mask, cfg;
int ret;

/* Duty-cycle cannot be modified for non-MND RCGs */
@@ -606,10 +606,8 @@ static int clk_rcg2_set_duty_cycle(struct clk_hw *hw, struct clk_duty *duty)

n = (~(notn_m) + m) & mask;

- duty_per = (duty->num * 100) / duty->den;
-
/* Calculate 2d value */
- d = DIV_ROUND_CLOSEST(n * duty_per * 2, 100);
+ d = DIV_ROUND_CLOSEST(n * duty->num * 2, duty->den);

/*
* Check bit widths of 2d. If D is too big reduce duty cycle.
@@ -1086,6 +1084,7 @@ static int clk_gfx3d_determine_rate(struct clk_hw *hw,
if (req->max_rate < parent_req.max_rate)
parent_req.max_rate = req->max_rate;

+ parent_req.best_parent_hw = req->best_parent_hw;
ret = __clk_determine_rate(req->best_parent_hw, &parent_req);
if (ret)
return ret;
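The duty-cycle hunk above drops the intermediate whole-percent step: computing `d` directly as `DIV_ROUND_CLOSEST(n * duty->num * 2, duty->den)` avoids the truncation that `(duty->num * 100) / duty->den` introduced. A small userspace demonstration of the precision difference (a sketch, not the driver code):

```c
#include <assert.h>

/* Round-to-nearest unsigned division, like the kernel's DIV_ROUND_CLOSEST
 * for positive operands. */
static unsigned int div_round_closest(unsigned int x, unsigned int y)
{
	return (x + y / 2) / y;
}

/* Old path: truncate the duty cycle to a whole percentage first. */
static unsigned int d_via_percent(unsigned int n, unsigned int num,
				  unsigned int den)
{
	unsigned int duty_per = (num * 100) / den;	/* truncates */

	return div_round_closest(n * duty_per * 2, 100);
}

/* New path: one rounded division, no intermediate truncation. */
static unsigned int d_direct(unsigned int n, unsigned int num,
			     unsigned int den)
{
	return div_round_closest(n * num * 2, den);
}
```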
diff --git a/drivers/clk/qcom/common.c b/drivers/clk/qcom/common.c
index cab2dbb7a8d49..e1a83ec481509 100644
--- a/drivers/clk/qcom/common.c
+++ b/drivers/clk/qcom/common.c
@@ -375,7 +375,7 @@ int qcom_cc_probe_by_index(struct platform_device *pdev, int index,

base = devm_platform_ioremap_resource(pdev, index);
if (IS_ERR(base))
- return -ENOMEM;
+ return PTR_ERR(base);

regmap = devm_regmap_init_mmio(&pdev->dev, base, desc->config);
if (IS_ERR(regmap))
diff --git a/drivers/clk/qcom/dispcc-sdm845.c b/drivers/clk/qcom/dispcc-sdm845.c
index e6139e8f74dc0..1859c093a241b 100644
--- a/drivers/clk/qcom/dispcc-sdm845.c
+++ b/drivers/clk/qcom/dispcc-sdm845.c
@@ -280,7 +280,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk0_clk_src = {
.name = "disp_cc_mdss_pclk0_clk_src",
.parent_data = disp_cc_parent_data_4,
.num_parents = ARRAY_SIZE(disp_cc_parent_data_4),
- .flags = CLK_SET_RATE_PARENT,
+ .flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
.ops = &clk_pixel_ops,
},
};
@@ -295,7 +295,7 @@ static struct clk_rcg2 disp_cc_mdss_pclk1_clk_src = {
.name = "disp_cc_mdss_pclk1_clk_src",
.parent_data = disp_cc_parent_data_4,
.num_parents = ARRAY_SIZE(disp_cc_parent_data_4),
- .flags = CLK_SET_RATE_PARENT,
+ .flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
.ops = &clk_pixel_ops,
},
};
diff --git a/drivers/clk/qcom/dispcc-sm7150.c b/drivers/clk/qcom/dispcc-sm7150.c
index 1e2a98a63511d..3cd2af842143c 100644
--- a/drivers/clk/qcom/dispcc-sm7150.c
+++ b/drivers/clk/qcom/dispcc-sm7150.c
@@ -371,7 +371,7 @@ static struct clk_rcg2 dispcc_mdss_pclk1_clk_src = {
.name = "dispcc_mdss_pclk1_clk_src",
.parent_data = dispcc_parent_data_4,
.num_parents = ARRAY_SIZE(dispcc_parent_data_4),
- .flags = CLK_SET_RATE_PARENT,
+ .flags = CLK_SET_RATE_PARENT | CLK_OPS_PARENT_ENABLE,
.ops = &clk_pixel_ops,
},
};
diff --git a/drivers/clk/qcom/gcc-ipq5018.c b/drivers/clk/qcom/gcc-ipq5018.c
index 24eb4c40da634..ee45ba9f13e06 100644
--- a/drivers/clk/qcom/gcc-ipq5018.c
+++ b/drivers/clk/qcom/gcc-ipq5018.c
@@ -1340,6 +1340,7 @@ static struct clk_branch gcc_sleep_clk_src = {
.name = "gcc_sleep_clk_src",
.parent_data = gcc_sleep_clk_data,
.num_parents = ARRAY_SIZE(gcc_sleep_clk_data),
+ .flags = CLK_IS_CRITICAL,
.ops = &clk_branch2_ops,
},
},
diff --git a/drivers/clk/qcom/gcc-msm8917.c b/drivers/clk/qcom/gcc-msm8917.c
index 3e2a2ae2ee6e9..cb5e18b084296 100644
--- a/drivers/clk/qcom/gcc-msm8917.c
+++ b/drivers/clk/qcom/gcc-msm8917.c
@@ -3034,7 +3034,6 @@ static struct gdsc cpp_gdsc = {
.pd = {
.name = "cpp_gdsc",
},
- .flags = ALWAYS_ON,
.pwrsts = PWRSTS_OFF_ON,
};

diff --git a/drivers/clk/qcom/gcc-msm8953.c b/drivers/clk/qcom/gcc-msm8953.c
index 8f29ecc74c50b..8fe1d3e421440 100644
--- a/drivers/clk/qcom/gcc-msm8953.c
+++ b/drivers/clk/qcom/gcc-msm8953.c
@@ -3946,7 +3946,6 @@ static struct gdsc cpp_gdsc = {
.pd = {
.name = "cpp_gdsc",
},
- .flags = ALWAYS_ON,
.pwrsts = PWRSTS_OFF_ON,
};

diff --git a/drivers/clk/qcom/gcc-qdu1000.c b/drivers/clk/qcom/gcc-qdu1000.c
index dbe9e9437939a..915bb9b4ff813 100644
--- a/drivers/clk/qcom/gcc-qdu1000.c
+++ b/drivers/clk/qcom/gcc-qdu1000.c
@@ -904,7 +904,7 @@ static struct clk_rcg2 gcc_sdcc5_apps_clk_src = {
.name = "gcc_sdcc5_apps_clk_src",
.parent_data = gcc_parent_data_8,
.num_parents = ARRAY_SIZE(gcc_parent_data_8),
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -923,7 +923,7 @@ static struct clk_rcg2 gcc_sdcc5_ice_core_clk_src = {
.name = "gcc_sdcc5_ice_core_clk_src",
.parent_data = gcc_parent_data_2,
.num_parents = ARRAY_SIZE(gcc_parent_data_2),
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-sdx75.c b/drivers/clk/qcom/gcc-sdx75.c
index 453a6bf8e8786..1f3cd58483a2d 100644
--- a/drivers/clk/qcom/gcc-sdx75.c
+++ b/drivers/clk/qcom/gcc-sdx75.c
@@ -1033,7 +1033,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
.name = "gcc_sdcc1_apps_clk_src",
.parent_data = gcc_parent_data_17,
.num_parents = ARRAY_SIZE(gcc_parent_data_17),
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -1057,7 +1057,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.name = "gcc_sdcc2_apps_clk_src",
.parent_data = gcc_parent_data_18,
.num_parents = ARRAY_SIZE(gcc_parent_data_18),
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-sm4450.c b/drivers/clk/qcom/gcc-sm4450.c
index e2d9e4691c5b7..023d840e9f4ef 100644
--- a/drivers/clk/qcom/gcc-sm4450.c
+++ b/drivers/clk/qcom/gcc-sm4450.c
@@ -769,7 +769,7 @@ static struct clk_rcg2 gcc_sdcc1_apps_clk_src = {
.parent_data = gcc_parent_data_4,
.num_parents = ARRAY_SIZE(gcc_parent_data_4),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -791,7 +791,7 @@ static struct clk_rcg2 gcc_sdcc1_ice_core_clk_src = {
.parent_data = gcc_parent_data_4,
.num_parents = ARRAY_SIZE(gcc_parent_data_4),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -815,7 +815,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.parent_data = gcc_parent_data_6,
.num_parents = ARRAY_SIZE(gcc_parent_data_6),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-sm8450.c b/drivers/clk/qcom/gcc-sm8450.c
index c445c271678a5..ff62381ecdd97 100644
--- a/drivers/clk/qcom/gcc-sm8450.c
+++ b/drivers/clk/qcom/gcc-sm8450.c
@@ -936,7 +936,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.parent_data = gcc_parent_data_7,
.num_parents = ARRAY_SIZE(gcc_parent_data_7),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -959,7 +959,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
.parent_data = gcc_parent_data_0,
.num_parents = ARRAY_SIZE(gcc_parent_data_0),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-sm8550.c b/drivers/clk/qcom/gcc-sm8550.c
index 862a9bf73bcb5..36a5b7de5b55d 100644
--- a/drivers/clk/qcom/gcc-sm8550.c
+++ b/drivers/clk/qcom/gcc-sm8550.c
@@ -1025,7 +1025,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.parent_data = gcc_parent_data_9,
.num_parents = ARRAY_SIZE(gcc_parent_data_9),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_shared_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -1048,7 +1048,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
.parent_data = gcc_parent_data_0,
.num_parents = ARRAY_SIZE(gcc_parent_data_0),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_shared_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
index fa1672c4e7d81..8c4b86494183c 100644
--- a/drivers/clk/qcom/gcc-sm8650.c
+++ b/drivers/clk/qcom/gcc-sm8650.c
@@ -1257,7 +1257,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.parent_data = gcc_parent_data_11,
.num_parents = ARRAY_SIZE(gcc_parent_data_11),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_shared_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -1279,7 +1279,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
.parent_data = gcc_parent_data_0,
.num_parents = ARRAY_SIZE(gcc_parent_data_0),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_shared_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
index 86cc8ecf16a48..0c49f0461ae32 100644
--- a/drivers/clk/qcom/gcc-x1e80100.c
+++ b/drivers/clk/qcom/gcc-x1e80100.c
@@ -1516,7 +1516,7 @@ static struct clk_rcg2 gcc_sdcc2_apps_clk_src = {
.parent_data = gcc_parent_data_9,
.num_parents = ARRAY_SIZE(gcc_parent_data_9),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

@@ -1538,7 +1538,7 @@ static struct clk_rcg2 gcc_sdcc4_apps_clk_src = {
.parent_data = gcc_parent_data_0,
.num_parents = ARRAY_SIZE(gcc_parent_data_0),
.flags = CLK_SET_RATE_PARENT,
- .ops = &clk_rcg2_floor_ops,
+ .ops = &clk_rcg2_shared_floor_ops,
},
};

diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
index e4f2d974f38a1..214a9e89379ae 100644
--- a/drivers/clk/renesas/rzg2l-cpg.c
+++ b/drivers/clk/renesas/rzg2l-cpg.c
@@ -116,8 +116,8 @@ struct div_hw_data {

struct rzg2l_pll5_param {
u32 pl5_fracin;
+ u16 pl5_intin;
u8 pl5_refdiv;
- u8 pl5_intin;
u8 pl5_postdiv1;
u8 pl5_postdiv2;
u8 pl5_spread;
@@ -563,8 +563,8 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA,
(params->pl5_intin << 24) + params->pl5_fracin),
params->pl5_refdiv) >> 24;
- foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate,
- params->pl5_postdiv1 * params->pl5_postdiv2);
+ foutpostdiv_rate = DIV_ROUND_CLOSEST(foutvco_rate,
+ params->pl5_postdiv1 * params->pl5_postdiv2);

return foutpostdiv_rate;
}
diff --git a/drivers/clk/tegra/clk-tegra124-emc.c b/drivers/clk/tegra/clk-tegra124-emc.c
index 2a6db04342815..0f6fb776b2298 100644
--- a/drivers/clk/tegra/clk-tegra124-emc.c
+++ b/drivers/clk/tegra/clk-tegra124-emc.c
@@ -538,8 +538,10 @@ struct clk *tegra124_clk_register_emc(void __iomem *base, struct device_node *np
tegra->hw.init = &init;

clk = clk_register(NULL, &tegra->hw);
- if (IS_ERR(clk))
+ if (IS_ERR(clk)) {
+ kfree(tegra);
return clk;
+ }

tegra->prev_parent = clk_hw_get_parent_by_index(
&tegra->hw, emc_get_parent(&tegra->hw))->clk;
diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
index d546903dba4f3..abfc2aa857678 100644
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -246,6 +246,7 @@ config KEYSTONE_TIMER

config INTEGRATOR_AP_TIMER
bool "Integrator-AP timer driver" if COMPILE_TEST
+ depends on OF
select CLKSRC_MMIO
help
Enables support for the Integrator-AP timer.
diff --git a/drivers/clocksource/sh_tmu.c b/drivers/clocksource/sh_tmu.c
index beffff81c00f3..3fc6ed9b56300 100644
--- a/drivers/clocksource/sh_tmu.c
+++ b/drivers/clocksource/sh_tmu.c
@@ -143,16 +143,6 @@ static void sh_tmu_start_stop_ch(struct sh_tmu_channel *ch, int start)

static int __sh_tmu_enable(struct sh_tmu_channel *ch)
{
- int ret;
-
- /* enable clock */
- ret = clk_enable(ch->tmu->clk);
- if (ret) {
- dev_err(&ch->tmu->pdev->dev, "ch%u: cannot enable clock\n",
- ch->index);
- return ret;
- }
-
/* make sure channel is disabled */
sh_tmu_start_stop_ch(ch, 0);

@@ -174,7 +164,6 @@ static int sh_tmu_enable(struct sh_tmu_channel *ch)
if (ch->enable_count++ > 0)
return 0;

- pm_runtime_get_sync(&ch->tmu->pdev->dev);
dev_pm_syscore_device(&ch->tmu->pdev->dev, true);

return __sh_tmu_enable(ch);
@@ -187,9 +176,6 @@ static void __sh_tmu_disable(struct sh_tmu_channel *ch)

/* disable interrupts in TMU block */
sh_tmu_write(ch, TCR, TCR_TPSC_CLK4);
-
- /* stop clock */
- clk_disable(ch->tmu->clk);
}

static void sh_tmu_disable(struct sh_tmu_channel *ch)
@@ -203,7 +189,6 @@ static void sh_tmu_disable(struct sh_tmu_channel *ch)
__sh_tmu_disable(ch);

dev_pm_syscore_device(&ch->tmu->pdev->dev, false);
- pm_runtime_put(&ch->tmu->pdev->dev);
}

static void sh_tmu_set_next(struct sh_tmu_channel *ch, unsigned long delta,
@@ -552,7 +537,6 @@ static int sh_tmu_setup(struct sh_tmu_device *tmu, struct platform_device *pdev)
goto err_clk_unprepare;

tmu->rate = clk_get_rate(tmu->clk) / 4;
- clk_disable(tmu->clk);

/* Map the memory resource. */
ret = sh_tmu_map_memory(tmu);
@@ -626,8 +610,6 @@ static int sh_tmu_probe(struct platform_device *pdev)
out:
if (tmu->has_clockevent || tmu->has_clocksource)
pm_runtime_irq_safe(&pdev->dev);
- else
- pm_runtime_idle(&pdev->dev);

return 0;
}
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index dbd73cd0cf535..9e86a0e8b9e86 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -164,8 +164,11 @@ static const struct of_device_id blocklist[] __initconst = {
{ .compatible = "qcom,sdm845", },
{ .compatible = "qcom,sdx75", },
{ .compatible = "qcom,sm6115", },
+ { .compatible = "qcom,sm6125", },
+ { .compatible = "qcom,sm6150", },
{ .compatible = "qcom,sm6350", },
{ .compatible = "qcom,sm6375", },
+ { .compatible = "qcom,sm7125", },
{ .compatible = "qcom,sm7225", },
{ .compatible = "qcom,sm7325", },
{ .compatible = "qcom,sm8150", },
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index 9d8cb44c26c70..f8f9ff2b73ea0 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -1058,7 +1058,7 @@ static void hybrid_init_cpu_capacity_scaling(bool refresh)
* the capacity of SMT threads is not deterministic even approximately,
* do not do that when SMT is in use.
*/
- if (hwp_is_hybrid && !sched_smt_active() && arch_enable_hybrid_capacity_scale()) {
+ if (hwp_is_hybrid && !cpu_smt_possible() && arch_enable_hybrid_capacity_scale()) {
hybrid_refresh_cpu_capacity_scaling();
/*
* Disabling ITMT causes sched domains to be rebuilt to disable asym
diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
index bb265541671a5..f6aee1f28ab88 100644
--- a/drivers/cpufreq/scmi-cpufreq.c
+++ b/drivers/cpufreq/scmi-cpufreq.c
@@ -98,6 +98,7 @@ static int scmi_cpu_domain_id(struct device *cpu_dev)
return -EINVAL;
}

+ of_node_put(domain_id.np);
return domain_id.args[0];
}

diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index 0e1bbc966135d..2cb11e5a11251 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -353,6 +353,16 @@ noinstr int cpuidle_enter_state(struct cpuidle_device *dev,
int cpuidle_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
bool *stop_tick)
{
+ /*
+ * If there is only a single idle state (or none), there is nothing
+ * meaningful for the governor to choose. Skip the governor and
+ * always use state 0 with the tick running.
+ */
+ if (drv->state_count <= 1) {
+ *stop_tick = false;
+ return 0;
+ }
+
return cpuidle_curr_governor->select(drv, dev, stop_tick);
}

diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c
index 3be761961f1be..0ce7323450011 100644
--- a/drivers/cpuidle/governors/menu.c
+++ b/drivers/cpuidle/governors/menu.c
@@ -245,7 +245,7 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,

/* Find the shortest expected idle interval. */
predicted_ns = get_typical_interval(data) * NSEC_PER_USEC;
- if (predicted_ns > RESIDENCY_THRESHOLD_NS) {
+ if (predicted_ns > RESIDENCY_THRESHOLD_NS || tick_nohz_tick_stopped()) {
unsigned int timer_us;

/* Determine the time till the closest timer. */
@@ -265,6 +265,16 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
RESOLUTION * DECAY * NSEC_PER_USEC);
/* Use the lowest expected idle interval to pick the idle state. */
predicted_ns = min((u64)timer_us * NSEC_PER_USEC, predicted_ns);
+ /*
+ * If the tick is already stopped, the cost of possible short
+ * idle duration misprediction is much higher, because the CPU
+ * may be stuck in a shallow idle state for a long time as a
+ * result of it. In that case, say we might mispredict and use
+ * the known time till the closest timer event for the idle
+ * state selection.
+ */
+ if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
+ predicted_ns = data->next_timer_ns;
} else {
/*
* Because the next timer event is not going to be determined
@@ -290,16 +300,6 @@ static int menu_select(struct cpuidle_driver *drv, struct cpuidle_device *dev,
return 0;
}

- /*
- * If the tick is already stopped, the cost of possible short idle
- * duration misprediction is much higher, because the CPU may be stuck
- * in a shallow idle state for a long time as a result of it. In that
- * case, say we might mispredict and use the known time till the closest
- * timer event for the idle state selection.
- */
- if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
- predicted_ns = data->next_timer_ns;
-
/*
* Find the idle state with the lowest power while satisfying
* our constraints.
diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index e809d030ab113..ece9f1e5a689f 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -4813,7 +4813,8 @@ static void dpaa2_dpseci_free(struct dpaa2_caam_priv *priv)
{
struct device *dev = priv->dev;
struct fsl_mc_device *ls_dev = to_fsl_mc_device(dev);
- int err;
+ struct dpaa2_caam_priv_per_cpu *ppriv;
+ int i, err;

if (DPSECI_VER(priv->major_ver, priv->minor_ver) > DPSECI_VER(5, 3)) {
err = dpseci_reset(priv->mc_io, 0, ls_dev->mc_handle);
@@ -4821,6 +4822,12 @@ static void dpaa2_dpseci_free(struct dpaa2_caam_priv *priv)
dev_err(dev, "dpseci_reset() failed\n");
}

+ for_each_cpu(i, priv->clean_mask) {
+ ppriv = per_cpu_ptr(priv->ppriv, i);
+ free_netdev(ppriv->net_dev);
+ }
+ free_cpumask_var(priv->clean_mask);
+
dpaa2_dpseci_congestion_free(priv);
dpseci_close(priv->mc_io, 0, ls_dev->mc_handle);
}
@@ -5006,16 +5013,15 @@ static int __cold dpaa2_dpseci_setup(struct fsl_mc_device *ls_dev)
struct device *dev = &ls_dev->dev;
struct dpaa2_caam_priv *priv;
struct dpaa2_caam_priv_per_cpu *ppriv;
- cpumask_var_t clean_mask;
int err, cpu;
u8 i;

err = -ENOMEM;
- if (!zalloc_cpumask_var(&clean_mask, GFP_KERNEL))
- goto err_cpumask;
-
priv = dev_get_drvdata(dev);

+ if (!zalloc_cpumask_var(&priv->clean_mask, GFP_KERNEL))
+ goto err_cpumask;
+
priv->dev = dev;
priv->dpsec_id = ls_dev->obj_desc.id;

@@ -5117,7 +5123,7 @@ static int __cold dpaa2_dpseci_setup(struct fsl_mc_device *ls_dev)
err = -ENOMEM;
goto err_alloc_netdev;
}
- cpumask_set_cpu(cpu, clean_mask);
+ cpumask_set_cpu(cpu, priv->clean_mask);
ppriv->net_dev->dev = *dev;

netif_napi_add_tx_weight(ppriv->net_dev, &ppriv->napi,
@@ -5125,18 +5131,16 @@ static int __cold dpaa2_dpseci_setup(struct fsl_mc_device *ls_dev)
DPAA2_CAAM_NAPI_WEIGHT);
}

- err = 0;
- goto free_cpumask;
+ return 0;

err_alloc_netdev:
- free_dpaa2_pcpu_netdev(priv, clean_mask);
+ free_dpaa2_pcpu_netdev(priv, priv->clean_mask);
err_get_rx_queue:
dpaa2_dpseci_congestion_free(priv);
err_get_vers:
dpseci_close(priv->mc_io, 0, ls_dev->mc_handle);
err_open:
-free_cpumask:
- free_cpumask_var(clean_mask);
+ free_cpumask_var(priv->clean_mask);
err_cpumask:
return err;
}
@@ -5181,7 +5185,6 @@ static int __cold dpaa2_dpseci_disable(struct dpaa2_caam_priv *priv)
ppriv = per_cpu_ptr(priv->ppriv, i);
napi_disable(&ppriv->napi);
netif_napi_del(&ppriv->napi);
- free_netdev(ppriv->net_dev);
}

return 0;
diff --git a/drivers/crypto/caam/caamalg_qi2.h b/drivers/crypto/caam/caamalg_qi2.h
index 61d1219a202fc..8e65b4b28c7ba 100644
--- a/drivers/crypto/caam/caamalg_qi2.h
+++ b/drivers/crypto/caam/caamalg_qi2.h
@@ -42,6 +42,7 @@
* @mc_io: pointer to MC portal's I/O object
* @domain: IOMMU domain
* @ppriv: per CPU pointers to private data
+ * @clean_mask: CPU mask of CPUs that have allocated netdevs
*/
struct dpaa2_caam_priv {
int dpsec_id;
@@ -65,6 +66,7 @@ struct dpaa2_caam_priv {

struct dpaa2_caam_priv_per_cpu __percpu *ppriv;
struct dentry *dfs_root;
+ cpumask_var_t clean_mask;
};

/**
diff --git a/drivers/crypto/cavium/cpt/cptvf_main.c b/drivers/crypto/cavium/cpt/cptvf_main.c
index c246920e6f540..bccd680c7f7ee 100644
--- a/drivers/crypto/cavium/cpt/cptvf_main.c
+++ b/drivers/crypto/cavium/cpt/cptvf_main.c
@@ -180,7 +180,8 @@ static void free_command_queues(struct cpt_vf *cptvf,

hlist_for_each_entry_safe(chunk, node, &cqinfo->queue[i].chead,
nextchunk) {
- dma_free_coherent(&pdev->dev, chunk->size,
+ dma_free_coherent(&pdev->dev,
+ chunk->size + CPT_NEXT_CHUNK_PTR_SIZE,
chunk->head,
chunk->dma_addr);
chunk->head = NULL;
diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
index 1c5a7189631ec..fa43da7824207 100644
--- a/drivers/crypto/ccp/psp-dev.c
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -331,6 +331,17 @@ struct psp_device *psp_get_master_device(void)
return sp ? sp->psp_data : NULL;
}

+int psp_restore(struct sp_device *sp)
+{
+ struct psp_device *psp = sp->psp_data;
+ int ret = 0;
+
+ if (psp->tee_data)
+ ret = tee_restore(psp);
+
+ return ret;
+}
+
void psp_pci_init(void)
{
psp_master = psp_get_master_device();
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index bc0ecdb5c79e5..3016d1369ac51 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -103,12 +103,7 @@ static size_t sev_es_tmr_size = SEV_TMR_SIZE;
#define NV_LENGTH (32 * 1024)
static void *sev_init_ex_buffer;

-/*
- * SEV_DATA_RANGE_LIST:
- * Array containing range of pages that firmware transitions to HV-fixed
- * page state.
- */
-static struct sev_data_range_list *snp_range_list;
+static void __sev_firmware_shutdown(struct sev_device *sev, bool panic);

static inline bool sev_version_greater_or_equal(u8 maj, u8 min)
{
@@ -1096,6 +1091,7 @@ static int snp_filter_reserved_mem_regions(struct resource *rs, void *arg)

static int __sev_snp_init_locked(int *error)
{
+ struct sev_data_range_list *snp_range_list __free(kfree) = NULL;
struct psp_device *psp = psp_master;
struct sev_data_snp_init_ex data;
struct sev_device *sev;
@@ -1399,6 +1395,37 @@ static int sev_get_platform_state(int *state, int *error)
return rc;
}

+static int sev_move_to_init_state(struct sev_issue_cmd *argp, bool *shutdown_required)
+{
+ struct sev_platform_init_args init_args = {0};
+ int rc;
+
+ rc = _sev_platform_init_locked(&init_args);
+ if (rc) {
+ argp->error = SEV_RET_INVALID_PLATFORM_STATE;
+ return rc;
+ }
+
+ *shutdown_required = true;
+
+ return 0;
+}
+
+static int snp_move_to_init_state(struct sev_issue_cmd *argp, bool *shutdown_required)
+{
+ int error, rc;
+
+ rc = __sev_snp_init_locked(&error);
+ if (rc) {
+ argp->error = SEV_RET_INVALID_PLATFORM_STATE;
+ return rc;
+ }
+
+ *shutdown_required = true;
+
+ return 0;
+}
+
static int sev_ioctl_do_reset(struct sev_issue_cmd *argp, bool writable)
{
int state, rc;
@@ -1451,24 +1478,31 @@ static int sev_ioctl_do_platform_status(struct sev_issue_cmd *argp)
static int sev_ioctl_do_pek_pdh_gen(int cmd, struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
+ bool shutdown_required = false;
int rc;

if (!writable)
return -EPERM;

if (sev->state == SEV_STATE_UNINIT) {
- rc = __sev_platform_init_locked(&argp->error);
+ rc = sev_move_to_init_state(argp, &shutdown_required);
if (rc)
return rc;
}

- return __sev_do_cmd_locked(cmd, NULL, &argp->error);
+ rc = __sev_do_cmd_locked(cmd, NULL, &argp->error);
+
+ if (shutdown_required)
+ __sev_firmware_shutdown(sev, false);
+
+ return rc;
}

static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_pek_csr input;
+ bool shutdown_required = false;
struct sev_data_pek_csr data;
void __user *input_address;
void *blob = NULL;
@@ -1500,7 +1534,7 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)

cmd:
if (sev->state == SEV_STATE_UNINIT) {
- ret = __sev_platform_init_locked(&argp->error);
+ ret = sev_move_to_init_state(argp, &shutdown_required);
if (ret)
goto e_free_blob;
}
@@ -1521,6 +1555,9 @@ static int sev_ioctl_do_pek_csr(struct sev_issue_cmd *argp, bool writable)
}

e_free_blob:
+ if (shutdown_required)
+ __sev_firmware_shutdown(sev, false);
+
kfree(blob);
return ret;
}
@@ -1736,6 +1773,7 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_pek_cert_import input;
struct sev_data_pek_cert_import data;
+ bool shutdown_required = false;
void *pek_blob, *oca_blob;
int ret;

@@ -1766,7 +1804,7 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)

/* If platform is not in INIT state then transition it to INIT */
if (sev->state != SEV_STATE_INIT) {
- ret = __sev_platform_init_locked(&argp->error);
+ ret = sev_move_to_init_state(argp, &shutdown_required);
if (ret)
goto e_free_oca;
}
@@ -1774,6 +1812,9 @@ static int sev_ioctl_do_pek_import(struct sev_issue_cmd *argp, bool writable)
ret = __sev_do_cmd_locked(SEV_CMD_PEK_CERT_IMPORT, &data, &argp->error);

e_free_oca:
+ if (shutdown_required)
+ __sev_firmware_shutdown(sev, false);
+
kfree(oca_blob);
e_free_pek:
kfree(pek_blob);
@@ -1890,18 +1931,9 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
struct sev_data_pdh_cert_export data;
void __user *input_cert_chain_address;
void __user *input_pdh_cert_address;
+ bool shutdown_required = false;
int ret;

- /* If platform is not in INIT state then transition it to INIT. */
- if (sev->state != SEV_STATE_INIT) {
- if (!writable)
- return -EPERM;
-
- ret = __sev_platform_init_locked(&argp->error);
- if (ret)
- return ret;
- }
-
if (copy_from_user(&input, (void __user *)argp->data, sizeof(input)))
return -EFAULT;

@@ -1941,6 +1973,17 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
data.cert_chain_len = input.cert_chain_len;

cmd:
+ /* If platform is not in INIT state then transition it to INIT. */
+ if (sev->state != SEV_STATE_INIT) {
+ if (!writable) {
+ ret = -EPERM;
+ goto e_free_cert;
+ }
+ ret = sev_move_to_init_state(argp, &shutdown_required);
+ if (ret)
+ goto e_free_cert;
+ }
+
ret = __sev_do_cmd_locked(SEV_CMD_PDH_CERT_EXPORT, &data, &argp->error);

/* If we query the length, FW responded with expected data. */
@@ -1967,6 +2010,9 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
}

e_free_cert:
+ if (shutdown_required)
+ __sev_firmware_shutdown(sev, false);
+
kfree(cert_blob);
e_free_pdh:
kfree(pdh_blob);
@@ -1976,12 +2022,13 @@ static int sev_ioctl_do_pdh_export(struct sev_issue_cmd *argp, bool writable)
static int sev_ioctl_do_snp_platform_status(struct sev_issue_cmd *argp)
{
struct sev_device *sev = psp_master->sev_data;
+ bool shutdown_required = false;
struct sev_data_snp_addr buf;
struct page *status_page;
+ int ret, error;
void *data;
- int ret;

- if (!sev->snp_initialized || !argp->data)
+ if (!argp->data)
return -EINVAL;

status_page = alloc_page(GFP_KERNEL_ACCOUNT);
@@ -1990,6 +2037,12 @@ static int sev_ioctl_do_snp_platform_status(struct sev_issue_cmd *argp)

data = page_address(status_page);

+ if (!sev->snp_initialized) {
+ ret = snp_move_to_init_state(argp, &shutdown_required);
+ if (ret)
+ goto cleanup;
+ }
+
/*
* Firmware expects status page to be in firmware-owned state, otherwise
* it will report firmware error code INVALID_PAGE_STATE (0x1A).
@@ -2018,6 +2071,9 @@ static int sev_ioctl_do_snp_platform_status(struct sev_issue_cmd *argp)
ret = -EFAULT;

cleanup:
+ if (shutdown_required)
+ __sev_snp_shutdown_locked(&error, false);
+
__free_pages(status_page, 0);
return ret;
}
@@ -2026,21 +2082,33 @@ static int sev_ioctl_do_snp_commit(struct sev_issue_cmd *argp)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_data_snp_commit buf;
+ bool shutdown_required = false;
+ int ret, error;

- if (!sev->snp_initialized)
- return -EINVAL;
+ if (!sev->snp_initialized) {
+ ret = snp_move_to_init_state(argp, &shutdown_required);
+ if (ret)
+ return ret;
+ }

buf.len = sizeof(buf);

- return __sev_do_cmd_locked(SEV_CMD_SNP_COMMIT, &buf, &argp->error);
+ ret = __sev_do_cmd_locked(SEV_CMD_SNP_COMMIT, &buf, &argp->error);
+
+ if (shutdown_required)
+ __sev_snp_shutdown_locked(&error, false);
+
+ return ret;
}

static int sev_ioctl_do_snp_set_config(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_snp_config config;
+ bool shutdown_required = false;
+ int ret, error;

- if (!sev->snp_initialized || !argp->data)
+ if (!argp->data)
return -EINVAL;

if (!writable)
@@ -2049,17 +2117,29 @@ static int sev_ioctl_do_snp_set_config(struct sev_issue_cmd *argp, bool writable
if (copy_from_user(&config, (void __user *)argp->data, sizeof(config)))
return -EFAULT;

- return __sev_do_cmd_locked(SEV_CMD_SNP_CONFIG, &config, &argp->error);
+ if (!sev->snp_initialized) {
+ ret = snp_move_to_init_state(argp, &shutdown_required);
+ if (ret)
+ return ret;
+ }
+
+ ret = __sev_do_cmd_locked(SEV_CMD_SNP_CONFIG, &config, &argp->error);
+
+ if (shutdown_required)
+ __sev_snp_shutdown_locked(&error, false);
+
+ return ret;
}

static int sev_ioctl_do_snp_vlek_load(struct sev_issue_cmd *argp, bool writable)
{
struct sev_device *sev = psp_master->sev_data;
struct sev_user_data_snp_vlek_load input;
+ bool shutdown_required = false;
+ int ret, error;
void *blob;
- int ret;

- if (!sev->snp_initialized || !argp->data)
+ if (!argp->data)
return -EINVAL;

if (!writable)
@@ -2078,8 +2158,18 @@ static int sev_ioctl_do_snp_vlek_load(struct sev_issue_cmd *argp, bool writable)

input.vlek_wrapped_address = __psp_pa(blob);

+ if (!sev->snp_initialized) {
+ ret = snp_move_to_init_state(argp, &shutdown_required);
+ if (ret)
+ goto cleanup;
+ }
+
ret = __sev_do_cmd_locked(SEV_CMD_SNP_VLEK_LOAD, &input, &argp->error);

+ if (shutdown_required)
+ __sev_snp_shutdown_locked(&error, false);
+
+cleanup:
kfree(blob);

return ret;
@@ -2334,11 +2424,6 @@ static void __sev_firmware_shutdown(struct sev_device *sev, bool panic)
sev_init_ex_buffer = NULL;
}

- if (snp_range_list) {
- kfree(snp_range_list);
- snp_range_list = NULL;
- }
-
__sev_snp_shutdown_locked(&error, panic);
}

diff --git a/drivers/crypto/ccp/sp-dev.c b/drivers/crypto/ccp/sp-dev.c
index 7eb3e46682860..ccbe009ad6e58 100644
--- a/drivers/crypto/ccp/sp-dev.c
+++ b/drivers/crypto/ccp/sp-dev.c
@@ -229,6 +229,18 @@ int sp_resume(struct sp_device *sp)
return 0;
}

+int sp_restore(struct sp_device *sp)
+{
+ if (sp->psp_data) {
+ int ret = psp_restore(sp);
+
+ if (ret)
+ return ret;
+ }
+
+ return sp_resume(sp);
+}
+
struct sp_device *sp_get_psp_master_device(void)
{
struct sp_device *i, *ret = NULL;
diff --git a/drivers/crypto/ccp/sp-dev.h b/drivers/crypto/ccp/sp-dev.h
index 6f9d7063257d7..c8a611ef275b5 100644
--- a/drivers/crypto/ccp/sp-dev.h
+++ b/drivers/crypto/ccp/sp-dev.h
@@ -141,6 +141,7 @@ void sp_destroy(struct sp_device *sp);

int sp_suspend(struct sp_device *sp);
int sp_resume(struct sp_device *sp);
+int sp_restore(struct sp_device *sp);
int sp_request_ccp_irq(struct sp_device *sp, irq_handler_t handler,
const char *name, void *data);
void sp_free_ccp_irq(struct sp_device *sp, void *data);
@@ -174,6 +175,7 @@ int psp_dev_init(struct sp_device *sp);
void psp_pci_init(void);
void psp_dev_destroy(struct sp_device *sp);
void psp_pci_exit(void);
+int psp_restore(struct sp_device *sp);

#else /* !CONFIG_CRYPTO_DEV_SP_PSP */

@@ -181,6 +183,7 @@ static inline int psp_dev_init(struct sp_device *sp) { return 0; }
static inline void psp_pci_init(void) { }
static inline void psp_dev_destroy(struct sp_device *sp) { }
static inline void psp_pci_exit(void) { }
+static inline int psp_restore(struct sp_device *sp) { return 0; }

#endif /* CONFIG_CRYPTO_DEV_SP_PSP */

diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
index 224edaaa737b6..e8f7fcf66cc72 100644
--- a/drivers/crypto/ccp/sp-pci.c
+++ b/drivers/crypto/ccp/sp-pci.c
@@ -353,6 +353,13 @@ static int __maybe_unused sp_pci_resume(struct device *dev)
return sp_resume(sp);
}

+static int __maybe_unused sp_pci_restore(struct device *dev)
+{
+ struct sp_device *sp = dev_get_drvdata(dev);
+
+ return sp_restore(sp);
+}
+
#ifdef CONFIG_CRYPTO_DEV_SP_PSP
static const struct sev_vdata sevv1 = {
.cmdresp_reg = 0x10580, /* C2PMSG_32 */
@@ -541,7 +548,14 @@ static const struct pci_device_id sp_pci_table[] = {
};
MODULE_DEVICE_TABLE(pci, sp_pci_table);

-static SIMPLE_DEV_PM_OPS(sp_pci_pm_ops, sp_pci_suspend, sp_pci_resume);
+static const struct dev_pm_ops sp_pci_pm_ops = {
+ .suspend = pm_sleep_ptr(sp_pci_suspend),
+ .resume = pm_sleep_ptr(sp_pci_resume),
+ .freeze = pm_sleep_ptr(sp_pci_suspend),
+ .thaw = pm_sleep_ptr(sp_pci_resume),
+ .poweroff = pm_sleep_ptr(sp_pci_suspend),
+ .restore_early = pm_sleep_ptr(sp_pci_restore),
+};

static struct pci_driver sp_pci_driver = {
.name = "ccp",
diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c
index 5e1d80724678d..92ffa412622a2 100644
--- a/drivers/crypto/ccp/tee-dev.c
+++ b/drivers/crypto/ccp/tee-dev.c
@@ -86,10 +86,34 @@ static inline void tee_free_cmd_buffer(struct tee_init_ring_cmd *cmd)
kfree(cmd);
}

+static bool tee_send_destroy_cmd(struct psp_tee_device *tee)
+{
+ unsigned int reg;
+ int ret;
+
+ ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_DESTROY, NULL,
+ TEE_DEFAULT_CMD_TIMEOUT, &reg);
+ if (ret) {
+ dev_err(tee->dev, "tee: ring destroy command timed out, disabling TEE support\n");
+ psp_dead = true;
+ return false;
+ }
+
+ if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
+ dev_err(tee->dev, "tee: ring destroy command failed (%#010lx)\n",
+ FIELD_GET(PSP_CMDRESP_STS, reg));
+ psp_dead = true;
+ return false;
+ }
+
+ return true;
+}
+
static int tee_init_ring(struct psp_tee_device *tee)
{
int ring_size = MAX_RING_BUFFER_ENTRIES * sizeof(struct tee_ring_cmd);
struct tee_init_ring_cmd *cmd;
+ bool retry = false;
unsigned int reg;
int ret;

@@ -112,6 +136,7 @@ static int tee_init_ring(struct psp_tee_device *tee)
/* Send command buffer details to Trusted OS by writing to
* CPU-PSP message registers
*/
+retry_init:
ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_INIT, cmd,
TEE_DEFAULT_CMD_TIMEOUT, &reg);
if (ret) {
@@ -122,9 +147,22 @@ static int tee_init_ring(struct psp_tee_device *tee)
}

if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
+ /*
+ * During the hibernate resume sequence driver may have gotten loaded
+ * but the ring not properly destroyed. If the ring doesn't work, try
+ * to destroy and re-init once.
+ */
+ if (!retry && FIELD_GET(PSP_CMDRESP_STS, reg) == PSP_TEE_STS_RING_BUSY) {
+ dev_info(tee->dev, "tee: ring init command failed with busy status, retrying\n");
+ if (tee_send_destroy_cmd(tee)) {
+ retry = true;
+ goto retry_init;
+ }
+ }
dev_err(tee->dev, "tee: ring init command failed (%#010lx)\n",
FIELD_GET(PSP_CMDRESP_STS, reg));
tee_free_ring(tee);
+ psp_dead = true;
ret = -EIO;
}

@@ -136,24 +174,13 @@ static int tee_init_ring(struct psp_tee_device *tee)

static void tee_destroy_ring(struct psp_tee_device *tee)
{
- unsigned int reg;
- int ret;
-
if (!tee->rb_mgr.ring_start)
return;

if (psp_dead)
goto free_ring;

- ret = psp_mailbox_command(tee->psp, PSP_CMD_TEE_RING_DESTROY, NULL,
- TEE_DEFAULT_CMD_TIMEOUT, &reg);
- if (ret) {
- dev_err(tee->dev, "tee: ring destroy command timed out, disabling TEE support\n");
- psp_dead = true;
- } else if (FIELD_GET(PSP_CMDRESP_STS, reg)) {
- dev_err(tee->dev, "tee: ring destroy command failed (%#010lx)\n",
- FIELD_GET(PSP_CMDRESP_STS, reg));
- }
+ tee_send_destroy_cmd(tee);

free_ring:
tee_free_ring(tee);
@@ -365,3 +392,8 @@ int psp_check_tee_status(void)
return 0;
}
EXPORT_SYMBOL(psp_check_tee_status);
+
+int tee_restore(struct psp_device *psp)
+{
+ return tee_init_ring(psp->tee_data);
+}
diff --git a/drivers/crypto/ccp/tee-dev.h b/drivers/crypto/ccp/tee-dev.h
index ea9a2b7c05f57..c23416cb7bb37 100644
--- a/drivers/crypto/ccp/tee-dev.h
+++ b/drivers/crypto/ccp/tee-dev.h
@@ -111,5 +111,6 @@ struct tee_ring_cmd {

int tee_dev_init(struct psp_device *psp);
void tee_dev_destroy(struct psp_device *psp);
+int tee_restore(struct psp_device *psp);

#endif /* __TEE_DEV_H__ */
diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
index d0b154d13f445..2c64bf4cdf0dc 100644
--- a/drivers/crypto/hisilicon/qm.c
+++ b/drivers/crypto/hisilicon/qm.c
@@ -551,9 +551,13 @@ static void qm_mb_write(struct hisi_qm *qm, const void *src)
}

#if IS_ENABLED(CONFIG_ARM64)
+ /*
+ * The dmb oshst instruction ensures that the data in the
+ * mailbox is written before it is sent to the hardware.
+ */
asm volatile("ldp %0, %1, %3\n"
- "stp %0, %1, %2\n"
"dmb oshst\n"
+ "stp %0, %1, %2\n"
: "=&r" (tmp0),
"=&r" (tmp1),
"+Q" (*((char __iomem *)fun_base))
diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
index 8605cb3cae92c..cdd485fcbc5bb 100644
--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
+++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
@@ -591,10 +591,8 @@ static int sec_ctx_base_init(struct sec_ctx *ctx)
int i, ret;

ctx->qps = sec_create_qps();
- if (!ctx->qps) {
- pr_err("Can not create sec qps!\n");
+ if (!ctx->qps)
return -ENODEV;
- }

sec = container_of(ctx->qps[0]->qm, struct sec_dev, qm);
ctx->sec = sec;
@@ -633,6 +631,9 @@ static void sec_ctx_base_uninit(struct sec_ctx *ctx)
{
int i;

+ if (!ctx->qps)
+ return;
+
for (i = 0; i < ctx->sec->ctx_q_num; i++)
sec_release_qp_ctx(ctx, &ctx->qp_ctx[i]);

@@ -644,6 +645,9 @@ static int sec_cipher_init(struct sec_ctx *ctx)
{
struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;

+ if (!ctx->qps)
+ return 0;
+
c_ctx->c_key = dma_alloc_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
&c_ctx->c_key_dma, GFP_KERNEL);
if (!c_ctx->c_key)
@@ -656,6 +660,9 @@ static void sec_cipher_uninit(struct sec_ctx *ctx)
{
struct sec_cipher_ctx *c_ctx = &ctx->c_ctx;

+ if (!ctx->qps)
+ return;
+
memzero_explicit(c_ctx->c_key, SEC_MAX_KEY_SIZE);
dma_free_coherent(ctx->dev, SEC_MAX_KEY_SIZE,
c_ctx->c_key, c_ctx->c_key_dma);
@@ -677,6 +684,9 @@ static void sec_auth_uninit(struct sec_ctx *ctx)
{
struct sec_auth_ctx *a_ctx = &ctx->a_ctx;

+ if (!ctx->qps)
+ return;
+
memzero_explicit(a_ctx->a_key, SEC_MAX_AKEY_SIZE);
dma_free_coherent(ctx->dev, SEC_MAX_AKEY_SIZE,
a_ctx->a_key, a_ctx->a_key_dma);
@@ -714,7 +724,7 @@ static int sec_skcipher_init(struct crypto_skcipher *tfm)
}

ret = sec_ctx_base_init(ctx);
- if (ret)
+ if (ret && ret != -ENODEV)
return ret;

ret = sec_cipher_init(ctx);
@@ -823,6 +833,9 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
struct device *dev = ctx->dev;
int ret;

+ if (!ctx->qps)
+ goto set_soft_key;
+
if (c_mode == SEC_CMODE_XTS) {
ret = xts_verify_key(tfm, key, keylen);
if (ret) {
@@ -853,13 +866,14 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
}

memcpy(c_ctx->c_key, key, keylen);
- if (c_ctx->fbtfm) {
- ret = crypto_sync_skcipher_setkey(c_ctx->fbtfm, key, keylen);
- if (ret) {
- dev_err(dev, "failed to set fallback skcipher key!\n");
- return ret;
- }
+
+set_soft_key:
+ ret = crypto_sync_skcipher_setkey(c_ctx->fbtfm, key, keylen);
+ if (ret) {
+ dev_err(dev, "failed to set fallback skcipher key!\n");
+ return ret;
}
+
return 0;
}

@@ -1135,6 +1149,9 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
struct crypto_authenc_keys keys;
int ret;

+ if (!ctx->qps)
+ return sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+
ctx->a_ctx.a_alg = a_alg;
ctx->c_ctx.c_alg = c_alg;
c_ctx->c_mode = c_mode;
@@ -1829,6 +1846,9 @@ static int sec_skcipher_ctx_init(struct crypto_skcipher *tfm)
if (ret)
return ret;

+ if (!ctx->qps)
+ return 0;
+
if (ctx->sec->qm.ver < QM_HW_V3) {
ctx->type_supported = SEC_BD_TYPE2;
ctx->req_op = &sec_skcipher_req_ops;
@@ -1837,7 +1857,7 @@ static int sec_skcipher_ctx_init(struct crypto_skcipher *tfm)
ctx->req_op = &sec_skcipher_req_ops_v3;
}

- return ret;
+ return 0;
}

static void sec_skcipher_ctx_exit(struct crypto_skcipher *tfm)
@@ -1905,7 +1925,7 @@ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
int ret;

ret = sec_aead_init(tfm);
- if (ret) {
+ if (ret && ret != -ENODEV) {
pr_err("hisi_sec2: aead init error!\n");
return ret;
}
@@ -1947,7 +1967,7 @@ static int sec_aead_xcm_ctx_init(struct crypto_aead *tfm)
int ret;

ret = sec_aead_init(tfm);
- if (ret) {
+ if (ret && ret != -ENODEV) {
dev_err(ctx->dev, "hisi_sec2: aead xcm init error!\n");
return ret;
}
@@ -2092,6 +2112,9 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
bool need_fallback = false;
int ret;

+ if (!ctx->qps)
+ goto soft_crypto;
+
if (!sk_req->cryptlen) {
if (ctx->c_ctx.c_mode == SEC_CMODE_XTS)
return -EINVAL;
@@ -2108,9 +2131,12 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
return -EINVAL;

if (unlikely(ctx->c_ctx.fallback || need_fallback))
- return sec_skcipher_soft_crypto(ctx, sk_req, encrypt);
+ goto soft_crypto;

return ctx->req_op->process(ctx, req);
+
+soft_crypto:
+ return sec_skcipher_soft_crypto(ctx, sk_req, encrypt);
}

static int sec_skcipher_encrypt(struct skcipher_request *sk_req)
@@ -2315,6 +2341,9 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
bool need_fallback = false;
int ret;

+ if (!ctx->qps)
+ goto soft_crypto;
+
req->flag = a_req->base.flags;
req->aead_req.aead_req = a_req;
req->c_req.encrypt = encrypt;
@@ -2324,11 +2353,14 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
ret = sec_aead_param_check(ctx, req, &need_fallback);
if (unlikely(ret)) {
if (need_fallback)
- return sec_aead_soft_crypto(ctx, a_req, encrypt);
+ goto soft_crypto;
return -EINVAL;
}

return ctx->req_op->process(ctx, req);
+
+soft_crypto:
+ return sec_aead_soft_crypto(ctx, a_req, encrypt);
}

static int sec_aead_encrypt(struct aead_request *a_req)
diff --git a/drivers/crypto/hisilicon/trng/trng.c b/drivers/crypto/hisilicon/trng/trng.c
index 66c551ecdee80..85a4b99f9055a 100644
--- a/drivers/crypto/hisilicon/trng/trng.c
+++ b/drivers/crypto/hisilicon/trng/trng.c
@@ -40,6 +40,7 @@
#define SEED_SHIFT_24 24
#define SEED_SHIFT_16 16
#define SEED_SHIFT_8 8
+#define SW_MAX_RANDOM_BYTES 65520

struct hisi_trng_list {
struct mutex lock;
@@ -53,8 +54,10 @@ struct hisi_trng {
struct list_head list;
struct hwrng rng;
u32 ver;
- bool is_used;
- struct mutex mutex;
+ u32 ctx_num;
+ /* The bytes of the random number generated since the last seeding. */
+ u32 random_bytes;
+ struct mutex lock;
};

struct hisi_trng_ctx {
@@ -63,10 +66,14 @@ struct hisi_trng_ctx {

static atomic_t trng_active_devs;
static struct hisi_trng_list trng_devices;
+static int hisi_trng_read(struct hwrng *rng, void *buf, size_t max, bool wait);

-static void hisi_trng_set_seed(struct hisi_trng *trng, const u8 *seed)
+static int hisi_trng_set_seed(struct hisi_trng *trng, const u8 *seed)
{
u32 val, seed_reg, i;
+ int ret;
+
+ writel(0x0, trng->base + SW_DRBG_BLOCKS);

for (i = 0; i < SW_DRBG_SEED_SIZE;
i += SW_DRBG_SEED_SIZE / SW_DRBG_SEED_REGS_NUM) {
@@ -78,6 +85,20 @@ static void hisi_trng_set_seed(struct hisi_trng *trng, const u8 *seed)
seed_reg = (i >> SW_DRBG_NUM_SHIFT) % SW_DRBG_SEED_REGS_NUM;
writel(val, trng->base + SW_DRBG_SEED(seed_reg));
}
+
+ writel(SW_DRBG_BLOCKS_NUM | (0x1 << SW_DRBG_ENABLE_SHIFT),
+ trng->base + SW_DRBG_BLOCKS);
+ writel(0x1, trng->base + SW_DRBG_INIT);
+ ret = readl_relaxed_poll_timeout(trng->base + SW_DRBG_STATUS,
+ val, val & BIT(0), SLEEP_US, TIMEOUT_US);
+ if (ret) {
+ pr_err("failed to init trng(%d)\n", ret);
+ return -EIO;
+ }
+
+ trng->random_bytes = 0;
+
+ return 0;
}

static int hisi_trng_seed(struct crypto_rng *tfm, const u8 *seed,
@@ -85,8 +106,7 @@ static int hisi_trng_seed(struct crypto_rng *tfm, const u8 *seed,
{
struct hisi_trng_ctx *ctx = crypto_rng_ctx(tfm);
struct hisi_trng *trng = ctx->trng;
- u32 val = 0;
- int ret = 0;
+ int ret;

if (slen < SW_DRBG_SEED_SIZE) {
pr_err("slen(%u) is not matched with trng(%d)\n", slen,
@@ -94,43 +114,45 @@ static int hisi_trng_seed(struct crypto_rng *tfm, const u8 *seed,
return -EINVAL;
}

- writel(0x0, trng->base + SW_DRBG_BLOCKS);
- hisi_trng_set_seed(trng, seed);
+ mutex_lock(&trng->lock);
+ ret = hisi_trng_set_seed(trng, seed);
+ mutex_unlock(&trng->lock);

- writel(SW_DRBG_BLOCKS_NUM | (0x1 << SW_DRBG_ENABLE_SHIFT),
- trng->base + SW_DRBG_BLOCKS);
- writel(0x1, trng->base + SW_DRBG_INIT);
+ return ret;
+}

- ret = readl_relaxed_poll_timeout(trng->base + SW_DRBG_STATUS,
- val, val & BIT(0), SLEEP_US, TIMEOUT_US);
- if (ret)
- pr_err("fail to init trng(%d)\n", ret);
+static int hisi_trng_reseed(struct hisi_trng *trng)
+{
+ u8 seed[SW_DRBG_SEED_SIZE];
+ int size;

- return ret;
+ if (!trng->random_bytes)
+ return 0;
+
+ size = hisi_trng_read(&trng->rng, seed, SW_DRBG_SEED_SIZE, false);
+ if (size != SW_DRBG_SEED_SIZE)
+ return -EIO;
+
+ return hisi_trng_set_seed(trng, seed);
}

-static int hisi_trng_generate(struct crypto_rng *tfm, const u8 *src,
- unsigned int slen, u8 *dstn, unsigned int dlen)
+static int hisi_trng_get_bytes(struct hisi_trng *trng, u8 *dstn, unsigned int dlen)
{
- struct hisi_trng_ctx *ctx = crypto_rng_ctx(tfm);
- struct hisi_trng *trng = ctx->trng;
u32 data[SW_DRBG_DATA_NUM];
u32 currsize = 0;
u32 val = 0;
int ret;
u32 i;

- if (dlen > SW_DRBG_BLOCKS_NUM * SW_DRBG_BYTES || dlen == 0) {
- pr_err("dlen(%u) exceeds limit(%d)!\n", dlen,
- SW_DRBG_BLOCKS_NUM * SW_DRBG_BYTES);
- return -EINVAL;
- }
+ ret = hisi_trng_reseed(trng);
+ if (ret)
+ return ret;

do {
ret = readl_relaxed_poll_timeout(trng->base + SW_DRBG_STATUS,
- val, val & BIT(1), SLEEP_US, TIMEOUT_US);
+ val, val & BIT(1), SLEEP_US, TIMEOUT_US);
if (ret) {
- pr_err("fail to generate random number(%d)!\n", ret);
+ pr_err("failed to generate random number(%d)!\n", ret);
break;
}

@@ -145,30 +167,57 @@ static int hisi_trng_generate(struct crypto_rng *tfm, const u8 *src,
currsize = dlen;
}

+ trng->random_bytes += SW_DRBG_BYTES;
writel(0x1, trng->base + SW_DRBG_GEN);
} while (currsize < dlen);

return ret;
}

+static int hisi_trng_generate(struct crypto_rng *tfm, const u8 *src,
+ unsigned int slen, u8 *dstn, unsigned int dlen)
+{
+ struct hisi_trng_ctx *ctx = crypto_rng_ctx(tfm);
+ struct hisi_trng *trng = ctx->trng;
+ unsigned int currsize = 0;
+ unsigned int block_size;
+ int ret;
+
+ if (!dstn || !dlen) {
+ pr_err("output is error, dlen %u!\n", dlen);
+ return -EINVAL;
+ }
+
+ do {
+ block_size = min_t(unsigned int, dlen - currsize, SW_MAX_RANDOM_BYTES);
+ mutex_lock(&trng->lock);
+ ret = hisi_trng_get_bytes(trng, dstn + currsize, block_size);
+ mutex_unlock(&trng->lock);
+ if (ret)
+ return ret;
+ currsize += block_size;
+ } while (currsize < dlen);
+
+ return 0;
+}
+
static int hisi_trng_init(struct crypto_tfm *tfm)
{
struct hisi_trng_ctx *ctx = crypto_tfm_ctx(tfm);
struct hisi_trng *trng;
- int ret = -EBUSY;
+ u32 ctx_num = ~0;

mutex_lock(&trng_devices.lock);
list_for_each_entry(trng, &trng_devices.list, list) {
- if (!trng->is_used) {
- trng->is_used = true;
+ if (trng->ctx_num < ctx_num) {
+ ctx_num = trng->ctx_num;
ctx->trng = trng;
- ret = 0;
- break;
}
}
+ ctx->trng->ctx_num++;
mutex_unlock(&trng_devices.lock);

- return ret;
+ return 0;
}

static void hisi_trng_exit(struct crypto_tfm *tfm)
@@ -176,7 +225,7 @@ static void hisi_trng_exit(struct crypto_tfm *tfm)
struct hisi_trng_ctx *ctx = crypto_tfm_ctx(tfm);

mutex_lock(&trng_devices.lock);
- ctx->trng->is_used = false;
+ ctx->trng->ctx_num--;
mutex_unlock(&trng_devices.lock);
}

@@ -238,7 +287,7 @@ static int hisi_trng_del_from_list(struct hisi_trng *trng)
int ret = -EBUSY;

mutex_lock(&trng_devices.lock);
- if (!trng->is_used) {
+ if (!trng->ctx_num) {
list_del(&trng->list);
ret = 0;
}
@@ -262,7 +311,9 @@ static int hisi_trng_probe(struct platform_device *pdev)
if (IS_ERR(trng->base))
return PTR_ERR(trng->base);

- trng->is_used = false;
+ trng->ctx_num = 0;
+ trng->random_bytes = SW_MAX_RANDOM_BYTES;
+ mutex_init(&trng->lock);
trng->ver = readl(trng->base + HISI_TRNG_VERSION);
if (!trng_devices.is_init) {
INIT_LIST_HEAD(&trng_devices.list);
diff --git a/drivers/crypto/hisilicon/zip/zip_crypto.c b/drivers/crypto/hisilicon/zip/zip_crypto.c
index 7327f8f29b013..42ac275be36fc 100644
--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
+++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
@@ -39,6 +39,7 @@ enum {
HZIP_CTX_Q_NUM
};

+#define GET_REQ_FROM_SQE(sqe) ((u64)(sqe)->dw26 | (u64)(sqe)->dw27 << 32)
#define COMP_NAME_TO_TYPE(alg_name) \
(!strcmp((alg_name), "deflate") ? HZIP_ALG_TYPE_DEFLATE : 0)

@@ -48,6 +49,7 @@ struct hisi_zip_req {
struct hisi_acc_hw_sgl *hw_dst;
dma_addr_t dma_src;
dma_addr_t dma_dst;
+ struct hisi_zip_qp_ctx *qp_ctx;
u16 req_id;
};

@@ -74,7 +76,6 @@ struct hisi_zip_sqe_ops {
void (*fill_req_type)(struct hisi_zip_sqe *sqe, u8 req_type);
void (*fill_tag)(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req);
void (*fill_sqe_type)(struct hisi_zip_sqe *sqe, u8 sqe_type);
- u32 (*get_tag)(struct hisi_zip_sqe *sqe);
u32 (*get_status)(struct hisi_zip_sqe *sqe);
u32 (*get_dstlen)(struct hisi_zip_sqe *sqe);
};
@@ -131,6 +132,7 @@ static struct hisi_zip_req *hisi_zip_create_req(struct hisi_zip_qp_ctx *qp_ctx,
req_cache = q + req_id;
req_cache->req_id = req_id;
req_cache->req = req;
+ req_cache->qp_ctx = qp_ctx;

return req_cache;
}
@@ -181,7 +183,8 @@ static void hisi_zip_fill_req_type(struct hisi_zip_sqe *sqe, u8 req_type)

static void hisi_zip_fill_tag(struct hisi_zip_sqe *sqe, struct hisi_zip_req *req)
{
- sqe->dw26 = req->req_id;
+ sqe->dw26 = lower_32_bits((u64)req);
+ sqe->dw27 = upper_32_bits((u64)req);
}

static void hisi_zip_fill_sqe_type(struct hisi_zip_sqe *sqe, u8 sqe_type)
@@ -236,7 +239,7 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
&req->dma_dst);
if (IS_ERR(req->hw_dst)) {
ret = PTR_ERR(req->hw_dst);
- dev_err(dev, "failed to map the dst buffer to hw slg (%d)!\n",
+ dev_err(dev, "failed to map the dst buffer to hw sgl (%d)!\n",
ret);
goto err_unmap_input;
}
@@ -264,11 +267,6 @@ static int hisi_zip_do_work(struct hisi_zip_qp_ctx *qp_ctx,
return ret;
}

-static u32 hisi_zip_get_tag(struct hisi_zip_sqe *sqe)
-{
- return sqe->dw26;
-}
-
static u32 hisi_zip_get_status(struct hisi_zip_sqe *sqe)
{
return sqe->dw3 & HZIP_BD_STATUS_M;
@@ -281,14 +279,12 @@ static u32 hisi_zip_get_dstlen(struct hisi_zip_sqe *sqe)

static void hisi_zip_acomp_cb(struct hisi_qp *qp, void *data)
{
- struct hisi_zip_qp_ctx *qp_ctx = qp->qp_ctx;
+ struct hisi_zip_sqe *sqe = data;
+ struct hisi_zip_req *req = (struct hisi_zip_req *)GET_REQ_FROM_SQE(sqe);
+ struct hisi_zip_qp_ctx *qp_ctx = req->qp_ctx;
const struct hisi_zip_sqe_ops *ops = qp_ctx->ctx->ops;
struct hisi_zip_dfx *dfx = &qp_ctx->zip_dev->dfx;
- struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
struct device *dev = &qp->qm->pdev->dev;
- struct hisi_zip_sqe *sqe = data;
- u32 tag = ops->get_tag(sqe);
- struct hisi_zip_req *req = req_q->q + tag;
struct acomp_req *acomp_req = req->req;
int err = 0;
u32 status;
@@ -392,7 +388,6 @@ static const struct hisi_zip_sqe_ops hisi_zip_ops = {
.fill_req_type = hisi_zip_fill_req_type,
.fill_tag = hisi_zip_fill_tag,
.fill_sqe_type = hisi_zip_fill_sqe_type,
- .get_tag = hisi_zip_get_tag,
.get_status = hisi_zip_get_status,
.get_dstlen = hisi_zip_get_dstlen,
};
@@ -580,7 +575,6 @@ static void hisi_zip_acomp_exit(struct crypto_acomp *tfm)
{
struct hisi_zip_ctx *ctx = crypto_tfm_ctx(&tfm->base);

- hisi_zip_set_acomp_cb(ctx, NULL);
hisi_zip_release_sgl_pool(ctx);
hisi_zip_release_req_q(ctx);
hisi_zip_ctx_exit(ctx);
diff --git a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
index b9b5e744a3f16..af8dbc7517cf8 100644
--- a/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
+++ b/drivers/crypto/intel/qat/qat_common/adf_pfvf_pf_proto.c
@@ -148,6 +148,16 @@ static struct pfvf_message handle_blkmsg_req(struct adf_accel_vf_info *vf_info,
blk_byte = FIELD_GET(ADF_VF2PF_SMALL_BLOCK_BYTE_MASK, req.data);
byte_max = ADF_VF2PF_SMALL_BLOCK_BYTE_MAX;
break;
+ default:
+ dev_err(&GET_DEV(vf_info->accel_dev),
+ "Invalid BlockMsg type 0x%.4x received from VF%u\n",
+ req.type, vf_info->vf_nr);
+ resp.type = ADF_PF2VF_MSGTYPE_BLKMSG_RESP;
+ resp.data = FIELD_PREP(ADF_PF2VF_BLKMSG_RESP_TYPE_MASK,
+ ADF_PF2VF_BLKMSG_RESP_TYPE_ERROR) |
+ FIELD_PREP(ADF_PF2VF_BLKMSG_RESP_DATA_MASK,
+ ADF_PF2VF_UNSPECIFIED_ERROR);
+ return resp;
}

/* Is this a request for CRC or data? */
diff --git a/drivers/crypto/marvell/octeontx/otx_cptvf_main.c b/drivers/crypto/marvell/octeontx/otx_cptvf_main.c
index 88a41d1ca5f64..6c0bfb3ea1c9f 100644
--- a/drivers/crypto/marvell/octeontx/otx_cptvf_main.c
+++ b/drivers/crypto/marvell/octeontx/otx_cptvf_main.c
@@ -168,7 +168,8 @@ static void free_command_queues(struct otx_cptvf *cptvf,
chunk = list_first_entry(&cqinfo->queue[i].chead,
struct otx_cpt_cmd_chunk, nextchunk);

- dma_free_coherent(&pdev->dev, chunk->size,
+ dma_free_coherent(&pdev->dev,
+ chunk->size + OTX_CPT_NEXT_CHUNK_PTR_SIZE,
chunk->head,
chunk->dma_addr);
chunk->head = NULL;
diff --git a/drivers/crypto/starfive/jh7110-aes.c b/drivers/crypto/starfive/jh7110-aes.c
index 86a1a1fa9f8f9..04f2f97ce238a 100644
--- a/drivers/crypto/starfive/jh7110-aes.c
+++ b/drivers/crypto/starfive/jh7110-aes.c
@@ -673,8 +673,10 @@ static int starfive_aes_aead_do_one_req(struct crypto_engine *engine, void *areq
"Failed to alloc memory for adata");

if (sg_copy_to_buffer(req->src, sg_nents_for_len(req->src, cryp->assoclen),
- rctx->adata, cryp->assoclen) != cryp->assoclen)
+ rctx->adata, cryp->assoclen) != cryp->assoclen) {
+ kfree(rctx->adata);
return -EINVAL;
+ }
}

if (cryp->total_in)
@@ -685,8 +687,11 @@ static int starfive_aes_aead_do_one_req(struct crypto_engine *engine, void *areq
ctx->rctx = rctx;

ret = starfive_aes_hw_init(ctx);
- if (ret)
+ if (ret) {
+ if (cryp->assoclen)
+ kfree(rctx->adata);
return ret;
+ }

if (!cryp->assoclen)
goto write_text;
diff --git a/drivers/cxl/core/hdm.c b/drivers/cxl/core/hdm.c
index 223c273c0cd17..fde609f37e562 100644
--- a/drivers/cxl/core/hdm.c
+++ b/drivers/cxl/core/hdm.c
@@ -699,14 +699,13 @@ static int cxl_decoder_commit(struct cxl_decoder *cxld)
writel(ctrl, hdm + CXL_HDM_DECODER0_CTRL_OFFSET(id));
up_read(&cxl_dpa_rwsem);

- port->commit_end++;
rc = cxld_await_commit(hdm, cxld->id);
if (rc) {
dev_dbg(&port->dev, "%s: error %d committing decoder\n",
dev_name(&cxld->dev), rc);
- cxld->reset(cxld);
return rc;
}
+ port->commit_end++;
cxld->flags |= CXL_DECODER_F_ENABLE;

return 0;
diff --git a/drivers/dma/dma-axi-dmac.c b/drivers/dma/dma-axi-dmac.c
index 36943b0c6d603..47d95d2d743b1 100644
--- a/drivers/dma/dma-axi-dmac.c
+++ b/drivers/dma/dma-axi-dmac.c
@@ -233,11 +233,9 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
unsigned int flags = 0;
unsigned int val;

- if (!chan->hw_sg) {
- val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
- if (val) /* Queue is full, wait for the next SOT IRQ */
- return;
- }
+ val = axi_dmac_read(dmac, AXI_DMAC_REG_START_TRANSFER);
+ if (val) /* Queue is full, wait for the next SOT IRQ */
+ return;

desc = chan->next_desc;

@@ -247,6 +245,7 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
return;
list_move_tail(&vdesc->node, &chan->active_descs);
desc = to_axi_dmac_desc(vdesc);
+ chan->next_desc = desc;
}
sg = &desc->sg[desc->num_submitted];

@@ -265,8 +264,6 @@ static void axi_dmac_start_transfer(struct axi_dmac_chan *chan)
else
chan->next_desc = NULL;
flags |= AXI_DMAC_FLAG_LAST;
- } else {
- chan->next_desc = desc;
}

sg->hw->id = axi_dmac_read(dmac, AXI_DMAC_REG_TRANSFER_ID);
diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
index 4794d58dab556..540b47c520dce 100644
--- a/drivers/dma/fsl-edma-main.c
+++ b/drivers/dma/fsl-edma-main.c
@@ -708,7 +708,6 @@ static void fsl_edma_remove(struct platform_device *pdev)
of_dma_controller_free(np);
dma_async_device_unregister(&fsl_edma->dma_dev);
fsl_edma_cleanup_vchan(&fsl_edma->dma_dev);
- fsl_disable_clocks(fsl_edma, fsl_edma->drvdata->dmamuxs);
}

static int fsl_edma_suspend_late(struct device *dev)
diff --git a/drivers/dma/mediatek/mtk-uart-apdma.c b/drivers/dma/mediatek/mtk-uart-apdma.c
index 1bdc1500be40f..3e7f8acf41dd0 100644
--- a/drivers/dma/mediatek/mtk-uart-apdma.c
+++ b/drivers/dma/mediatek/mtk-uart-apdma.c
@@ -41,7 +41,7 @@
#define VFF_STOP_CLR_B 0
#define VFF_EN_CLR_B 0
#define VFF_INT_EN_CLR_B 0
-#define VFF_4G_SUPPORT_CLR_B 0
+#define VFF_ADDR2_CLR_B 0

/*
* interrupt trigger level for tx
@@ -72,7 +72,7 @@
/* TX: the buffer size SW can write. RX: the buffer size HW can write. */
#define VFF_LEFT_SIZE 0x40
#define VFF_DEBUG_STATUS 0x50
-#define VFF_4G_SUPPORT 0x54
+#define VFF_ADDR2 0x54

struct mtk_uart_apdmadev {
struct dma_device ddev;
@@ -149,7 +149,7 @@ static void mtk_uart_apdma_start_tx(struct mtk_chan *c)
mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_TX_INT_CLR_B);

if (mtkd->support_33bits)
- mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_EN_B);
+ mtk_uart_apdma_write(c, VFF_ADDR2, upper_32_bits(d->addr));
}

mtk_uart_apdma_write(c, VFF_EN, VFF_EN_B);
@@ -192,7 +192,7 @@ static void mtk_uart_apdma_start_rx(struct mtk_chan *c)
mtk_uart_apdma_write(c, VFF_INT_FLAG, VFF_RX_INT_CLR_B);

if (mtkd->support_33bits)
- mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_EN_B);
+ mtk_uart_apdma_write(c, VFF_ADDR2, upper_32_bits(d->addr));
}

mtk_uart_apdma_write(c, VFF_INT_EN, VFF_RX_INT_EN_B);
@@ -298,7 +298,7 @@ static int mtk_uart_apdma_alloc_chan_resources(struct dma_chan *chan)
}

if (mtkd->support_33bits)
- mtk_uart_apdma_write(c, VFF_4G_SUPPORT, VFF_4G_SUPPORT_CLR_B);
+ mtk_uart_apdma_write(c, VFF_ADDR2, VFF_ADDR2_CLR_B);

err_pm:
pm_runtime_put_noidle(mtkd->ddev.dev);
diff --git a/drivers/dma/stm32/stm32-dma3.c b/drivers/dma/stm32/stm32-dma3.c
index 0be6e944df6fd..fde1b10d3532c 100644
--- a/drivers/dma/stm32/stm32-dma3.c
+++ b/drivers/dma/stm32/stm32-dma3.c
@@ -1835,12 +1835,7 @@ static struct platform_driver stm32_dma3_driver = {
},
};

-static int __init stm32_dma3_init(void)
-{
- return platform_driver_register(&stm32_dma3_driver);
-}
-
-subsys_initcall(stm32_dma3_init);
+module_platform_driver(stm32_dma3_driver);

MODULE_DESCRIPTION("STM32 DMA3 controller driver");
MODULE_AUTHOR("Amelie Delaunay <amelie.delaunay@xxxxxxxxxxx>");
diff --git a/drivers/dma/stm32/stm32-mdma.c b/drivers/dma/stm32/stm32-mdma.c
index e6d525901de7e..3560a6945fbfc 100644
--- a/drivers/dma/stm32/stm32-mdma.c
+++ b/drivers/dma/stm32/stm32-mdma.c
@@ -731,7 +731,7 @@ static int stm32_mdma_setup_xfer(struct stm32_mdma_chan *chan,
struct stm32_mdma_chan_config *chan_config = &chan->chan_config;
struct scatterlist *sg;
dma_addr_t src_addr, dst_addr;
- u32 m2m_hw_period, ccr, ctcr, ctbr;
+ u32 m2m_hw_period = 0, ccr = 0, ctcr, ctbr;
int i, ret = 0;

if (chan_config->m2m_hw)
diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
index 583bf49031cf2..c97027379263d 100644
--- a/drivers/dma/sun6i-dma.c
+++ b/drivers/dma/sun6i-dma.c
@@ -582,6 +582,22 @@ static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
return ret;
}

+static u32 find_burst_size(const u32 burst_lengths, u32 maxburst)
+{
+ if (!maxburst)
+ return 1;
+
+ if (BIT(maxburst) & burst_lengths)
+ return maxburst;
+
+ /* Hardware only does power-of-two bursts. */
+ for (u32 burst = rounddown_pow_of_two(maxburst); burst > 0; burst /= 2)
+ if (BIT(burst) & burst_lengths)
+ return burst;
+
+ return 1;
+}
+
static int set_config(struct sun6i_dma_dev *sdev,
struct dma_slave_config *sconfig,
enum dma_transfer_direction direction,
@@ -615,15 +631,13 @@ static int set_config(struct sun6i_dma_dev *sdev,
return -EINVAL;
if (!(BIT(dst_addr_width) & sdev->slave.dst_addr_widths))
return -EINVAL;
- if (!(BIT(src_maxburst) & sdev->cfg->src_burst_lengths))
- return -EINVAL;
- if (!(BIT(dst_maxburst) & sdev->cfg->dst_burst_lengths))
- return -EINVAL;

src_width = convert_buswidth(src_addr_width);
dst_width = convert_buswidth(dst_addr_width);
- dst_burst = convert_burst(dst_maxburst);
- src_burst = convert_burst(src_maxburst);
+ src_burst = find_burst_size(sdev->cfg->src_burst_lengths, src_maxburst);
+ dst_burst = find_burst_size(sdev->cfg->dst_burst_lengths, dst_maxburst);
+ dst_burst = convert_burst(dst_burst);
+ src_burst = convert_burst(src_burst);

*p_cfg = DMA_CHAN_CFG_SRC_WIDTH(src_width) |
DMA_CHAN_CFG_DST_WIDTH(dst_width);
diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
index 3bb851e1e608a..6c5fb06976415 100644
--- a/drivers/edac/altera_edac.c
+++ b/drivers/edac/altera_edac.c
@@ -1563,8 +1563,7 @@ static int altr_portb_setup(struct altr_edac_device_dev *device)
goto err_release_group_1;
}
rc = devm_request_irq(&altdev->ddev, altdev->sb_irq,
- prv->ecc_irq_handler,
- IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
+ prv->ecc_irq_handler, IRQF_TRIGGER_HIGH,
ecc_name, altdev);
if (rc) {
edac_printk(KERN_ERR, EDAC_DEVICE, "PortB SBERR IRQ error\n");
@@ -1587,8 +1586,7 @@ static int altr_portb_setup(struct altr_edac_device_dev *device)
goto err_release_group_1;
}
rc = devm_request_irq(&altdev->ddev, altdev->db_irq,
- prv->ecc_irq_handler,
- IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
+ prv->ecc_irq_handler, IRQF_TRIGGER_HIGH,
ecc_name, altdev);
if (rc) {
edac_printk(KERN_ERR, EDAC_DEVICE, "PortB DBERR IRQ error\n");
@@ -1970,8 +1968,7 @@ static int altr_edac_a10_device_add(struct altr_arria10_edac *edac,
goto err_release_group1;
}
rc = devm_request_irq(edac->dev, altdev->sb_irq, prv->ecc_irq_handler,
- IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
- ecc_name, altdev);
+ IRQF_TRIGGER_HIGH, ecc_name, altdev);
if (rc) {
edac_printk(KERN_ERR, EDAC_DEVICE, "No SBERR IRQ resource\n");
goto err_release_group1;
@@ -1993,7 +1990,7 @@ static int altr_edac_a10_device_add(struct altr_arria10_edac *edac,
goto err_release_group1;
}
rc = devm_request_irq(edac->dev, altdev->db_irq, prv->ecc_irq_handler,
- IRQF_ONESHOT | IRQF_TRIGGER_HIGH,
+ IRQF_TRIGGER_HIGH,
ecc_name, altdev);
if (rc) {
edac_printk(KERN_ERR, EDAC_DEVICE, "No DBERR IRQ resource\n");
diff --git a/drivers/edac/i5000_edac.c b/drivers/edac/i5000_edac.c
index 4b5a71f8739d9..8c6a291e01f6a 100644
--- a/drivers/edac/i5000_edac.c
+++ b/drivers/edac/i5000_edac.c
@@ -1111,6 +1111,7 @@ static void calculate_dimm_size(struct i5000_pvt *pvt)

n = snprintf(p, space, " ");
p += n;
+ space -= n;
for (branch = 0; branch < MAX_BRANCHES; branch++) {
n = snprintf(p, space, " branch %d | ", branch);
p += n;
diff --git a/drivers/edac/i5400_edac.c b/drivers/edac/i5400_edac.c
index 49b4499269fb7..68afb3bb8e290 100644
--- a/drivers/edac/i5400_edac.c
+++ b/drivers/edac/i5400_edac.c
@@ -1025,13 +1025,13 @@ static void calculate_dimm_size(struct i5400_pvt *pvt)
space -= n;
}

- space -= n;
edac_dbg(2, "%s\n", mem_buffer);
p = mem_buffer;
space = PAGE_SIZE;

n = snprintf(p, space, " ");
p += n;
+ space -= n;
for (branch = 0; branch < MAX_BRANCHES; branch++) {
n = snprintf(p, space, " branch %d | ", branch);
p += n;
diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
index 9fdfccbc6479a..9516ee870cd25 100644
--- a/drivers/firmware/arm_ffa/driver.c
+++ b/drivers/firmware/arm_ffa/driver.c
@@ -895,10 +895,27 @@ static void __do_sched_recv_cb(u16 part_id, u16 vcpu, bool is_per_vcpu)
callback(vcpu, is_per_vcpu, cb_data);
}

+/*
+ * Map logical ID index to the u16 index within the packed ID list.
+ *
+ * For native responses (FF-A width == kernel word size), IDs are
+ * tightly packed: idx -> idx.
+ *
+ * For 32-bit responses on a 64-bit kernel, each 64-bit register
+ * contributes 4 x u16 values but only the lower 2 are defined; the
+ * upper 2 are garbage. This mapping skips those upper halves:
+ * 0,1,2,3,4,5,... -> 0,1,4,5,8,9,...
+ */
+static int list_idx_to_u16_idx(int idx, bool is_native_resp)
+{
+ return is_native_resp ? idx : idx + 2 * (idx >> 1);
+}
+
static void ffa_notification_info_get(void)
{
- int idx, list, max_ids, lists_cnt, ids_processed, ids_count[MAX_IDS_64];
- bool is_64b_resp;
+ int ids_processed, ids_count[MAX_IDS_64];
+ int idx, list, max_ids, lists_cnt;
+ bool is_64b_resp, is_native_resp;
ffa_value_t ret;
u64 id_list;

@@ -915,6 +932,7 @@ static void ffa_notification_info_get(void)
}

is_64b_resp = (ret.a0 == FFA_FN64_SUCCESS);
+ is_native_resp = (ret.a0 == FFA_FN_NATIVE(SUCCESS));

ids_processed = 0;
lists_cnt = FIELD_GET(NOTIFICATION_INFO_GET_ID_COUNT, ret.a2);
@@ -931,12 +949,16 @@ static void ffa_notification_info_get(void)

/* Process IDs */
for (list = 0; list < lists_cnt; list++) {
+ int u16_idx;
u16 vcpu_id, part_id, *packed_id_list = (u16 *)&ret.a3;

if (ids_processed >= max_ids - 1)
break;

- part_id = packed_id_list[ids_processed++];
+ u16_idx = list_idx_to_u16_idx(ids_processed,
+ is_native_resp);
+ part_id = packed_id_list[u16_idx];
+ ids_processed++;

if (ids_count[list] == 1) { /* Global Notification */
__do_sched_recv_cb(part_id, 0, false);
@@ -948,7 +970,10 @@ static void ffa_notification_info_get(void)
if (ids_processed >= max_ids - 1)
break;

- vcpu_id = packed_id_list[ids_processed++];
+ u16_idx = list_idx_to_u16_idx(ids_processed,
+ is_native_resp);
+ vcpu_id = packed_id_list[u16_idx];
+ ids_processed++;

__do_sched_recv_cb(part_id, vcpu_id, true);
}
@@ -1807,6 +1832,7 @@ static int __init ffa_init(void)

cleanup_notifs:
ffa_notifications_cleanup();
+ ffa_rxtx_unmap(drv_info->vm_id);
free_pages:
if (drv_info->tx_buffer)
free_pages_exact(drv_info->tx_buffer, rxtx_bufsz);
diff --git a/drivers/firmware/efi/cper-arm.c b/drivers/firmware/efi/cper-arm.c
index 52d18490b59e3..70e1735dfcdd4 100644
--- a/drivers/firmware/efi/cper-arm.c
+++ b/drivers/firmware/efi/cper-arm.c
@@ -226,7 +226,8 @@ static void cper_print_arm_err_info(const char *pfx, u32 type,
}

void cper_print_proc_arm(const char *pfx,
- const struct cper_sec_proc_arm *proc)
+ const struct cper_sec_proc_arm *proc,
+ u32 length)
{
int i, len, max_ctx_type;
struct cper_arm_err_info *err_info;
@@ -238,9 +239,12 @@ void cper_print_proc_arm(const char *pfx,

len = proc->section_length - (sizeof(*proc) +
proc->err_info_num * (sizeof(*err_info)));
- if (len < 0) {
- printk("%ssection length: %d\n", pfx, proc->section_length);
- printk("%ssection length is too small\n", pfx);
+
+ if (len < 0 || proc->section_length > length) {
+ printk("%ssection length: %d, CPER size: %d\n",
+ pfx, proc->section_length, length);
+ printk("%ssection length is too %s\n", pfx,
+ (len < 0) ? "small" : "big");
printk("%sfirmware-generated error record is incorrect\n", pfx);
printk("%sERR_INFO_NUM is %d\n", pfx, proc->err_info_num);
return;
diff --git a/drivers/firmware/efi/cper.c b/drivers/firmware/efi/cper.c
index 16b27fe9608fd..3587295cd0206 100644
--- a/drivers/firmware/efi/cper.c
+++ b/drivers/firmware/efi/cper.c
@@ -560,6 +560,11 @@ static void cper_print_fw_err(const char *pfx,
} else {
offset = sizeof(*fw_err);
}
+ if (offset > length) {
+ printk("%s""error section length is too small: offset=%d, length=%d\n",
+ pfx, offset, length);
+ return;
+ }

buf += offset;
length -= offset;
@@ -659,7 +664,8 @@ cper_estatus_print_section(const char *pfx, struct acpi_hest_generic_data *gdata

printk("%ssection_type: ARM processor error\n", newpfx);
if (gdata->error_data_length >= sizeof(*arm_err))
- cper_print_proc_arm(newpfx, arm_err);
+ cper_print_proc_arm(newpfx, arm_err,
+ gdata->error_data_length);
else
goto err_section_too_small;
#endif
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index acabc856fe8a5..0d1a65879a358 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -667,13 +667,13 @@ static __init int match_config_table(const efi_guid_t *guid,

static __init void reserve_unaccepted(struct efi_unaccepted_memory *unaccepted)
{
- phys_addr_t start, size;
+ phys_addr_t start, end;

start = PAGE_ALIGN_DOWN(efi.unaccepted);
- size = PAGE_ALIGN(sizeof(*unaccepted) + unaccepted->size);
+ end = PAGE_ALIGN(efi.unaccepted + sizeof(*unaccepted) + unaccepted->size);

- memblock_add(start, size);
- memblock_reserve(start, size);
+ memblock_add(start, end - start);
+ memblock_reserve(start, end - start);
}

int __init efi_config_parse_tables(const efi_config_table_t *config_tables,
diff --git a/drivers/fpga/dfl.c b/drivers/fpga/dfl.c
index c406b949026fa..c6bad1d2d2ae4 100644
--- a/drivers/fpga/dfl.c
+++ b/drivers/fpga/dfl.c
@@ -2029,7 +2029,7 @@ static void __exit dfl_fpga_exit(void)
bus_unregister(&dfl_bus_type);
}

-module_init(dfl_fpga_init);
+subsys_initcall(dfl_fpga_init);
module_exit(dfl_fpga_exit);

MODULE_DESCRIPTION("FPGA Device Feature List (DFL) Support");
diff --git a/drivers/fpga/of-fpga-region.c b/drivers/fpga/of-fpga-region.c
index 8526a5a86f0cb..efc35dad6aaa3 100644
--- a/drivers/fpga/of-fpga-region.c
+++ b/drivers/fpga/of-fpga-region.c
@@ -83,7 +83,7 @@ static struct fpga_manager *of_fpga_region_get_mgr(struct device_node *np)
* done with the bridges.
*
* Return: 0 for success (even if there are no bridges specified)
- * or -EBUSY if any of the bridges are in use.
+ * or an error code if any of the bridges are not available.
*/
static int of_fpga_region_get_bridges(struct fpga_region *region)
{
@@ -130,10 +130,10 @@ static int of_fpga_region_get_bridges(struct fpga_region *region)
&region->bridge_list);
of_node_put(br);

- /* If any of the bridges are in use, give up */
- if (ret == -EBUSY) {
+ /* If any of the bridges are not available, give up */
+ if (ret) {
fpga_bridges_put(&region->bridge_list);
- return -EBUSY;
+ return ret;
}
}

diff --git a/drivers/gpio/gpio-aspeed-sgpio.c b/drivers/gpio/gpio-aspeed-sgpio.c
index 72755fee64784..e69a6ce7be3fb 100644
--- a/drivers/gpio/gpio-aspeed-sgpio.c
+++ b/drivers/gpio/gpio-aspeed-sgpio.c
@@ -534,7 +534,7 @@ static const struct of_device_id aspeed_sgpio_of_table[] = {

MODULE_DEVICE_TABLE(of, aspeed_sgpio_of_table);

-static int __init aspeed_sgpio_probe(struct platform_device *pdev)
+static int aspeed_sgpio_probe(struct platform_device *pdev)
{
u32 nr_gpios, sgpio_freq, sgpio_clk_div, gpio_cnt_regval, pin_mask;
const struct aspeed_sgpio_pdata *pdata;
@@ -629,11 +629,12 @@ static int __init aspeed_sgpio_probe(struct platform_device *pdev)
}

static struct platform_driver aspeed_sgpio_driver = {
+ .probe = aspeed_sgpio_probe,
.driver = {
.name = KBUILD_MODNAME,
.of_match_table = aspeed_sgpio_of_table,
},
};

-module_platform_driver_probe(aspeed_sgpio_driver, aspeed_sgpio_probe);
+module_platform_driver(aspeed_sgpio_driver);
MODULE_DESCRIPTION("Aspeed Serial GPIO Driver");
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
index bebfbc1497d8e..d484804181756 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
@@ -1132,8 +1132,10 @@ static int amdgpu_acpi_enumerate_xcc(void)
if (!dev_info)
ret = amdgpu_acpi_dev_init(&dev_info, xcc_info, sbdf);

- if (ret == -ENOMEM)
+ if (ret == -ENOMEM) {
+ kfree(xcc_info);
return ret;
+ }

if (!dev_info) {
kfree(xcc_info);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 1cf90557b310b..cab75f5c9f2fd 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -2963,7 +2963,8 @@ static int amdgpu_device_ip_init(struct amdgpu_device *adev)
if (r)
goto init_failed;

- if (adev->mman.buffer_funcs_ring->sched.ready)
+ if (adev->mman.buffer_funcs_ring &&
+ adev->mman.buffer_funcs_ring->sched.ready)
amdgpu_ttm_set_buffer_funcs_status(adev, true);

/* Don't init kfd if whole hive need to be reset during init */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
index e09db65880e1a..f62ac736ed76c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
@@ -2895,6 +2895,7 @@ int amdgpu_discovery_set_ip_blocks(struct amdgpu_device *adev)
case IP_VERSION(6, 0, 0):
case IP_VERSION(6, 0, 1):
case IP_VERSION(6, 1, 0):
+ case IP_VERSION(6, 1, 1):
adev->hdp.funcs = &hdp_v6_0_funcs;
break;
case IP_VERSION(7, 0, 0):
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 9d79f1cc6a19b..951ab989d668b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -418,8 +418,15 @@ amdgpu_dma_buf_move_notify(struct dma_buf_attachment *attach)
r = dma_resv_reserve_fences(resv, 2);
if (!r)
r = amdgpu_vm_clear_freed(adev, vm, NULL);
+
+ /* Don't pass 'ticket' to amdgpu_vm_handle_moved: we want the clear=true
+ * path to be used otherwise we might update the PT of another process
+ * while it's using the BO.
+ * With clear=true, amdgpu_vm_bo_update will sync to command submission
+ * from the same VM.
+ */
if (!r)
- r = amdgpu_vm_handle_moved(adev, vm, ticket);
+ r = amdgpu_vm_handle_moved(adev, vm, NULL);

if (r && r != -EBUSY)
DRM_ERROR("Failed to invalidate VM page tables (%d))\n",
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 48de2f088a3b9..bf706ee2e0ed3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -3081,7 +3081,6 @@ static int __init amdgpu_init(void)
if (r)
goto error_fence;

- DRM_INFO("amdgpu kernel modesetting enabled.\n");
amdgpu_register_atpx_handler();
amdgpu_acpi_detect();

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
index f159641271672..564c68c9277b3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
@@ -974,6 +974,16 @@ void amdgpu_gmc_get_vbios_allocations(struct amdgpu_device *adev)
case CHIP_RENOIR:
adev->mman.keep_stolen_vga_memory = true;
break;
+ case CHIP_POLARIS10:
+ case CHIP_POLARIS11:
+ case CHIP_POLARIS12:
+ /* MacBookPros with switchable graphics put VRAM at 0 when
+ * the iGPU is enabled which results in cursor issues if
+ * the cursor ends up at 0. Reserve vram at 0 in that case.
+ */
+ if (adev->gmc.vram_start == 0)
+ adev->mman.keep_stolen_vga_memory = true;
+ break;
default:
adev->mman.keep_stolen_vga_memory = false;
break;
@@ -1260,7 +1270,7 @@ int amdgpu_gmc_get_nps_memranges(struct amdgpu_device *adev,
}

err:
- kfree(ranges);
+ kvfree(ranges);

return ret;
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 7e6057a6e7f17..ba9a9adca0bff 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -132,7 +132,8 @@ static enum drm_gpu_sched_stat amdgpu_job_timedout(struct drm_sched_job *s_job)
amdgpu_vm_put_task_info(ti);
}

- dma_fence_set_error(&s_job->s_fence->finished, -ETIME);
+ if (dma_fence_get_status(&s_job->s_fence->finished) == 0)
+ dma_fence_set_error(&s_job->s_fence->finished, -ETIME);

/* attempt a per ring reset */
if (amdgpu_gpu_recovery &&
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
index fa84208eed18e..26260873f6a15 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
@@ -694,7 +694,7 @@ psp_cmd_submit_buf(struct psp_context *psp,
ras_intr = amdgpu_ras_intr_triggered();
if (ras_intr)
break;
- usleep_range(10, 100);
+ usleep_range(60, 100);
amdgpu_device_invalidate_hdp(psp->adev, NULL);
}

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
index d9cdc89d4cde1..bf563904b3fa9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ras.c
@@ -3643,7 +3643,7 @@ int amdgpu_ras_init(struct amdgpu_device *adev)
* to handle fatal error */
r = amdgpu_nbio_ras_sw_init(adev);
if (r)
- return r;
+ goto release_con;

if (adev->nbio.ras &&
adev->nbio.ras->init_ras_controller_interrupt) {
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
index 9247cd7b1868c..78af6e9282256 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_virt.c
@@ -923,6 +923,7 @@ bool amdgpu_virt_fw_load_skip_check(struct amdgpu_device *adev, uint32_t ucode_i
|| ucode_id == AMDGPU_UCODE_ID_SDMA5
|| ucode_id == AMDGPU_UCODE_ID_SDMA6
|| ucode_id == AMDGPU_UCODE_ID_SDMA7
+ || ucode_id == AMDGPU_UCODE_ID_SDMA_RS64
|| ucode_id == AMDGPU_UCODE_ID_RLC_G
|| ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_CNTL
|| ucode_id == AMDGPU_UCODE_ID_RLC_RESTORE_LIST_GPM_MEM
diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
index f085fdaafae00..9479bf9ea30fe 100644
--- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
@@ -1913,7 +1913,8 @@ static int vcn_v2_0_start_sriov(struct amdgpu_device *adev)
struct mmsch_v2_0_cmd_end end = { {0} };
struct mmsch_v2_0_init_header *header;
uint32_t *init_table = adev->virt.mm_table.cpu_addr;
- uint8_t i = 0;
+
+ /* This path only programs VCN instance 0. */

header = (struct mmsch_v2_0_init_header *)init_table;
direct_wt.cmd_header.command_type = MMSCH_COMMAND__DIRECT_REG_WRITE;
@@ -1932,93 +1933,93 @@ static int vcn_v2_0_start_sriov(struct amdgpu_device *adev)
size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.fw[0]->size + 4);

MMSCH_V2_0_INSERT_DIRECT_RD_MOD_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_STATUS),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_STATUS),
0xFFFFFFFF, 0x00000004);

/* mc resume*/
if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) {
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_lo);
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
adev->firmware.ucode[AMDGPU_UCODE_ID_VCN].tmr_mc_addr_hi);
offset = 0;
} else {
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW),
lower_32_bits(adev->vcn.inst->gpu_addr));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH),
upper_32_bits(adev->vcn.inst->gpu_addr));
offset = size;
}

MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET0),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET0),
0);
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE0),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE0),
size);

MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW),
lower_32_bits(adev->vcn.inst->gpu_addr + offset));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH),
upper_32_bits(adev->vcn.inst->gpu_addr + offset));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET1),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET1),
0);
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE1),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE1),
AMDGPU_VCN_STACK_SIZE);

MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW),
lower_32_bits(adev->vcn.inst->gpu_addr + offset +
AMDGPU_VCN_STACK_SIZE));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH),
upper_32_bits(adev->vcn.inst->gpu_addr + offset +
AMDGPU_VCN_STACK_SIZE));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_OFFSET2),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_OFFSET2),
0);
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_VCPU_CACHE_SIZE2),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_VCPU_CACHE_SIZE2),
AMDGPU_VCN_CONTEXT_SIZE);

for (r = 0; r < adev->vcn.num_enc_rings; ++r) {
ring = &adev->vcn.inst->ring_enc[r];
ring->wptr = 0;
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_RB_BASE_LO),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_LO),
lower_32_bits(ring->gpu_addr));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_RB_BASE_HI),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_BASE_HI),
upper_32_bits(ring->gpu_addr));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_RB_SIZE),
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_RB_SIZE),
ring->ring_size / 4);
}

ring = &adev->vcn.inst->ring_dec;
ring->wptr = 0;
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_RBC_RB_64BIT_BAR_LOW),
lower_32_bits(ring->gpu_addr));
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i,
+ SOC15_REG_OFFSET(UVD, 0,
mmUVD_LMI_RBC_RB_64BIT_BAR_HIGH),
upper_32_bits(ring->gpu_addr));
/* force RBC into idle state */
@@ -2029,7 +2030,7 @@ static int vcn_v2_0_start_sriov(struct amdgpu_device *adev)
tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_NO_UPDATE, 1);
tmp = REG_SET_FIELD(tmp, UVD_RBC_RB_CNTL, RB_RPTR_WR_EN, 1);
MMSCH_V2_0_INSERT_DIRECT_WT(
- SOC15_REG_OFFSET(UVD, i, mmUVD_RBC_RB_CNTL), tmp);
+ SOC15_REG_OFFSET(UVD, 0, mmUVD_RBC_RB_CNTL), tmp);

/* add end packet */
tmp = sizeof(struct mmsch_v2_0_cmd_end);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
index a8abc30918013..3a8df47aa5821 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
@@ -401,27 +401,25 @@ static int kfd_dbg_get_dev_watch_id(struct kfd_process_device *pdd, int *watch_i
return -ENOMEM;
}

-static void kfd_dbg_clear_dev_watch_id(struct kfd_process_device *pdd, int watch_id)
+static void kfd_dbg_clear_dev_watch_id(struct kfd_process_device *pdd, u32 watch_id)
{
spin_lock(&pdd->dev->watch_points_lock);

/* process owns device watch point so safe to clear */
- if ((pdd->alloc_watch_ids >> watch_id) & 0x1) {
- pdd->alloc_watch_ids &= ~(0x1 << watch_id);
- pdd->dev->alloc_watch_ids &= ~(0x1 << watch_id);
+ if (pdd->alloc_watch_ids & BIT(watch_id)) {
+ pdd->alloc_watch_ids &= ~BIT(watch_id);
+ pdd->dev->alloc_watch_ids &= ~BIT(watch_id);
}

spin_unlock(&pdd->dev->watch_points_lock);
}

-static bool kfd_dbg_owns_dev_watch_id(struct kfd_process_device *pdd, int watch_id)
+static bool kfd_dbg_owns_dev_watch_id(struct kfd_process_device *pdd, u32 watch_id)
{
bool owns_watch_id = false;

spin_lock(&pdd->dev->watch_points_lock);
- owns_watch_id = watch_id < MAX_WATCH_ADDRESSES &&
- ((pdd->alloc_watch_ids >> watch_id) & 0x1);
-
+ owns_watch_id = pdd->alloc_watch_ids & BIT(watch_id);
spin_unlock(&pdd->dev->watch_points_lock);

return owns_watch_id;
@@ -432,6 +430,9 @@ int kfd_dbg_trap_clear_dev_address_watch(struct kfd_process_device *pdd,
{
int r;

+ if (watch_id >= MAX_WATCH_ADDRESSES)
+ return -EINVAL;
+
if (!kfd_dbg_owns_dev_watch_id(pdd, watch_id))
return -EINVAL;

@@ -469,6 +470,9 @@ int kfd_dbg_trap_set_dev_address_watch(struct kfd_process_device *pdd,
if (r)
return r;

+ if (*watch_id >= MAX_WATCH_ADDRESSES)
+ return -EINVAL;
+
if (!pdd->dev->kfd->shared_resources.enable_mes) {
r = debug_lock_and_unmap(pdd->dev->dqm);
if (r) {
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
index 6798510c4a707..4e5ae56562838 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_events.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_events.c
@@ -331,6 +331,12 @@ static int kfd_event_page_set(struct kfd_process *p, void *kernel_address,
if (p->signal_page)
return -EBUSY;

+ if (size < KFD_SIGNAL_EVENT_LIMIT * 8) {
+ pr_err("Event page size %llu is too small, need at least %lu bytes\n",
+ size, (unsigned long)(KFD_SIGNAL_EVENT_LIMIT * 8));
+ return -EINVAL;
+ }
+
page = kzalloc(sizeof(*page), GFP_KERNEL);
if (!page)
return -ENOMEM;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
index f31e9fbf634a0..52246f16cc242 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
@@ -62,7 +62,7 @@ svm_migrate_gart_map(struct amdgpu_ring *ring, uint64_t npages,
*gart_addr = adev->gmc.gart_start;

num_dw = ALIGN(adev->mman.buffer_funcs->copy_num_dw, 8);
- num_bytes = npages * 8;
+ num_bytes = npages * 8 * AMDGPU_GPU_PAGES_IN_CPU_PAGE;

r = amdgpu_job_alloc_with_ib(adev, &adev->mman.high_pr,
AMDGPU_FENCE_OWNER_UNDEFINED,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
index 45923da7709fd..64f3a0687f8a2 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
@@ -1970,7 +1970,7 @@ static int signal_eviction_fence(struct kfd_process *p)
ef = dma_fence_get_rcu_safe(&p->ef);
rcu_read_unlock();
if (!ef)
- return -EINVAL;
+ return true;

ret = dma_fence_signal(ef);
dma_fence_put(ef);
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
index 0c6ef2919d870..1f1169a4d1540 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
@@ -275,8 +275,8 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope

/* EOP buffer is not required for all ASICs */
if (properties->eop_ring_buffer_address) {
- if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) {
- pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n",
+ if (properties->eop_ring_buffer_size < topo_dev->node_props.eop_buffer_size) {
+ pr_debug("queue eop bo size 0x%x is less than node eop buf size 0x%x\n",
properties->eop_ring_buffer_size,
topo_dev->node_props.eop_buffer_size);
err = -EINVAL;
@@ -284,7 +284,7 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope
}
err = kfd_queue_buffer_get(vm, (void *)properties->eop_ring_buffer_address,
&properties->eop_buf_bo,
- properties->eop_ring_buffer_size);
+ ALIGN(properties->eop_ring_buffer_size, PAGE_SIZE));
if (err)
goto out_err_unreserve;
}
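The change above rounds the EOP buffer size up to a page boundary before acquiring it. The ALIGN() below mirrors the kernel's power-of-two alignment macro (PAGE_SIZE fixed at 4096 for illustration):

```c
#include <assert.h>

/* Power-of-two round-up, as in the kernel's ALIGN() macro. */
#define ALIGN(x, a) (((x) + ((a) - 1)) & ~((a) - 1))
#define PAGE_SIZE 4096u
```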
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
index d65b0b23ec7b8..7f2dbb6c2cbf8 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
@@ -33,6 +33,7 @@
#include "amdgpu_hmm.h"
#include "amdgpu.h"
#include "amdgpu_xgmi.h"
+#include "amdgpu_reset.h"
#include "kfd_priv.h"
#include "kfd_svm.h"
#include "kfd_migrate.h"
@@ -2334,6 +2335,9 @@ static void svm_range_drain_retry_fault(struct svm_range_list *svms)

pr_debug("drain retry fault gpu %d svms %p\n", i, svms);

+ if (!down_read_trylock(&pdd->dev->adev->reset_domain->sem))
+ continue;
+
amdgpu_ih_wait_on_checkpoint_process_ts(pdd->dev->adev,
pdd->dev->adev->irq.retry_cam_enabled ?
&pdd->dev->adev->irq.ih :
@@ -2343,6 +2347,7 @@ static void svm_range_drain_retry_fault(struct svm_range_list *svms)
amdgpu_ih_wait_on_checkpoint_process_ts(pdd->dev->adev,
&pdd->dev->adev->irq.ih_soft);

+ up_read(&pdd->dev->adev->reset_domain->sem);

pr_debug("drain retry fault gpu %d svms 0x%p done\n", i, svms);
}
@@ -2526,7 +2531,7 @@ svm_range_unmap_from_cpu(struct mm_struct *mm, struct svm_range *prange,
adev = pdd->dev->adev;

/* Check and drain ih1 ring if cam not available */
- if (adev->irq.ih1.ring_size) {
+ if (!adev->irq.retry_cam_enabled && adev->irq.ih1.ring_size) {
ih = &adev->irq.ih1;
checkpoint_wptr = amdgpu_ih_get_wptr(adev, ih);
if (ih->rptr != checkpoint_wptr) {
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index b146b0529ae7f..4d508129a5e65 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -3243,7 +3243,17 @@ static int dm_resume(void *handle)
struct dc_commit_streams_params commit_params = {};

if (dm->dc->caps.ips_support) {
+ if (!amdgpu_in_reset(adev))
+ mutex_lock(&dm->dc_lock);
+
+ /* Need to set POWER_STATE_D0 first or it will not execute
+ * idle_power_optimizations command to DMUB.
+ */
+ dc_dmub_srv_set_power_state(dm->dc->ctx->dmub_srv, DC_ACPI_CM_POWER_STATE_D0);
dc_dmub_srv_apply_idle_power_optimizations(dm->dc, false);
+
+ if (!amdgpu_in_reset(adev))
+ mutex_unlock(&dm->dc_lock);
}

if (amdgpu_in_reset(adev)) {
@@ -9888,10 +9898,10 @@ static void dm_set_writeback(struct amdgpu_display_manager *dm,

wb_info->dwb_params.capture_rate = dwb_capture_rate_0;

- wb_info->dwb_params.scaler_taps.h_taps = 4;
- wb_info->dwb_params.scaler_taps.v_taps = 4;
- wb_info->dwb_params.scaler_taps.h_taps_c = 2;
- wb_info->dwb_params.scaler_taps.v_taps_c = 2;
+ wb_info->dwb_params.scaler_taps.h_taps = 1;
+ wb_info->dwb_params.scaler_taps.v_taps = 1;
+ wb_info->dwb_params.scaler_taps.h_taps_c = 1;
+ wb_info->dwb_params.scaler_taps.v_taps_c = 1;
wb_info->dwb_params.subsample_position = DWB_INTERSTITIAL_SUBSAMPLING;

wb_info->mcif_buf_params.luma_pitch = afb->base.pitches[0];
@@ -10185,7 +10195,7 @@ static void amdgpu_dm_atomic_commit_tail(struct drm_atomic_state *state)
continue;
}
for (j = 0; j < status->plane_count; j++)
- dummy_updates[j].surface = status->plane_states[0];
+ dummy_updates[j].surface = status->plane_states[j];

sort(dummy_updates, status->plane_count,
sizeof(*dummy_updates), dm_plane_layer_index_cmp, NULL);
@@ -10884,6 +10894,8 @@ static bool should_reset_plane(struct drm_atomic_state *state,
struct drm_crtc_state *old_crtc_state, *new_crtc_state;
struct dm_crtc_state *old_dm_crtc_state, *new_dm_crtc_state;
struct amdgpu_device *adev = drm_to_adev(plane->dev);
+ struct drm_connector_state *new_con_state;
+ struct drm_connector *connector;
int i;

/*
@@ -10894,6 +10906,15 @@ static bool should_reset_plane(struct drm_atomic_state *state,
state->allow_modeset)
return true;

+ /* Check for writeback commit */
+ for_each_new_connector_in_state(state, connector, new_con_state, i) {
+ if (connector->connector_type != DRM_MODE_CONNECTOR_WRITEBACK)
+ continue;
+
+ if (new_con_state->writeback_job)
+ return true;
+ }
+
if (amdgpu_in_reset(adev) && state->allow_modeset)
return true;

@@ -11070,7 +11091,7 @@ static int dm_check_cursor_fb(struct amdgpu_crtc *new_acrtc,
* check tiling flags when the FB doesn't have a modifier.
*/
if (!(fb->flags & DRM_MODE_FB_MODIFIERS)) {
- if (adev->family >= AMDGPU_FAMILY_GC_12_0_0) {
+ if (adev->family == AMDGPU_FAMILY_GC_12_0_0) {
linear = AMDGPU_TILING_GET(afb->tiling_flags, GFX12_SWIZZLE_MODE) == 0;
} else if (adev->family >= AMDGPU_FAMILY_AI) {
linear = AMDGPU_TILING_GET(afb->tiling_flags, SWIZZLE_MODE) == 0;
@@ -11492,10 +11513,9 @@ static int dm_crtc_get_cursor_mode(struct amdgpu_device *adev,

/* Overlay cursor not supported on HW before DCN
* DCN401 does not have the cursor-on-scaled-plane or cursor-on-yuv-plane restrictions
- * as previous DCN generations, so enable native mode on DCN401 in addition to DCE
+ * as previous DCN generations, so enable native mode on DCN401
*/
- if (amdgpu_ip_version(adev, DCE_HWIP, 0) == 0 ||
- amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(4, 0, 1)) {
+ if (amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(4, 0, 1)) {
*cursor_mode = DM_CURSOR_NATIVE_MODE;
return 0;
}
@@ -11815,6 +11835,12 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
* need to be added for DC to not disable a plane by mistake
*/
if (dm_new_crtc_state->cursor_mode == DM_CURSOR_OVERLAY_MODE) {
+ if (amdgpu_ip_version(adev, DCE_HWIP, 0) == 0) {
+ drm_dbg(dev, "Overlay cursor not supported on DCE\n");
+ ret = -EINVAL;
+ goto fail;
+ }
+
ret = drm_atomic_add_affected_planes(state, crtc);
if (ret)
goto fail;
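The one-character fix in amdgpu_dm_atomic_commit_tail() above (plane_states[0] → plane_states[j]) is the classic copy-loop index bug; a standalone sketch:

```c
#include <assert.h>

/* Sketch: each update entry must reference its own plane state,
 * not the first one (the pre-fix behaviour). */
static void fill_updates(const int *states, int *updates, int count)
{
	for (int j = 0; j < count; j++)
		updates[j] = states[j];	/* was states[0] before the fix */
}
```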
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
index 62e30942f735d..1e993f5d734a6 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
@@ -275,7 +275,7 @@ static int amdgpu_dm_plane_validate_dcc(struct amdgpu_device *adev,
if (!dcc->enable)
return 0;

- if (adev->family < AMDGPU_FAMILY_GC_12_0_0 &&
+ if (adev->family != AMDGPU_FAMILY_GC_12_0_0 &&
format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN)
return -EINVAL;

@@ -896,7 +896,7 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
upper_32_bits(chroma_addr);
}

- if (adev->family >= AMDGPU_FAMILY_GC_12_0_0) {
+ if (adev->family == AMDGPU_FAMILY_GC_12_0_0) {
ret = amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(adev, afb, format,
rotation, plane_size,
tiling_info, dcc,
@@ -1055,10 +1055,15 @@ static void amdgpu_dm_plane_get_min_max_dc_plane_scaling(struct drm_device *dev,
*min_downscale = plane_cap->max_downscale_factor.nv12;
break;

+ /* All 64 bpp formats have the same fp16 scaling limits */
case DRM_FORMAT_XRGB16161616F:
case DRM_FORMAT_ARGB16161616F:
case DRM_FORMAT_XBGR16161616F:
case DRM_FORMAT_ABGR16161616F:
+ case DRM_FORMAT_XRGB16161616:
+ case DRM_FORMAT_ARGB16161616:
+ case DRM_FORMAT_XBGR16161616:
+ case DRM_FORMAT_ABGR16161616:
*max_upscale = plane_cap->max_upscale_factor.fp16;
*min_downscale = plane_cap->max_downscale_factor.fp16;
break;
diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
index 6e9d6090b10ff..33ee7e81b1b93 100644
--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
@@ -630,32 +630,32 @@ static struct wm_table ddr5_wm_table = {
.wm_inst = WM_A,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.72,
- .sr_exit_time_us = 28.0,
- .sr_enter_plus_exit_time_us = 30.0,
+ .sr_exit_time_us = 31.0,
+ .sr_enter_plus_exit_time_us = 33.0,
.valid = true,
},
{
.wm_inst = WM_B,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.72,
- .sr_exit_time_us = 28.0,
- .sr_enter_plus_exit_time_us = 30.0,
+ .sr_exit_time_us = 31.0,
+ .sr_enter_plus_exit_time_us = 33.0,
.valid = true,
},
{
.wm_inst = WM_C,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.72,
- .sr_exit_time_us = 28.0,
- .sr_enter_plus_exit_time_us = 30.0,
+ .sr_exit_time_us = 31.0,
+ .sr_enter_plus_exit_time_us = 33.0,
.valid = true,
},
{
.wm_inst = WM_D,
.wm_type = WM_TYPE_PSTATE_CHG,
.pstate_latency_us = 11.72,
- .sr_exit_time_us = 28.0,
- .sr_enter_plus_exit_time_us = 30.0,
+ .sr_exit_time_us = 31.0,
+ .sr_enter_plus_exit_time_us = 33.0,
.valid = true,
},
}
diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn32/dcn32_dio_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn32/dcn32_dio_link_encoder.c
index 06907e8a4eda1..ddc736af776c9 100644
--- a/drivers/gpu/drm/amd/display/dc/dio/dcn32/dcn32_dio_link_encoder.c
+++ b/drivers/gpu/drm/amd/display/dc/dio/dcn32/dcn32_dio_link_encoder.c
@@ -188,9 +188,18 @@ void dcn32_link_encoder_get_max_link_cap(struct link_encoder *enc,
if (!query_dp_alt_from_dmub(enc, &cmd))
return;

- if (cmd.query_dp_alt.data.is_usb &&
- cmd.query_dp_alt.data.is_dp4 == 0)
- link_settings->lane_count = MIN(LANE_COUNT_TWO, link_settings->lane_count);
+ /*
+ * USB-C DisplayPort Alt Mode lane count limitation logic:
+ * When USB and DP share the same USB-C connector, hardware must allocate
+ * some lanes for USB data, limiting DP to maximum 2 lanes instead of 4.
+ * This ensures USB functionality remains available while DP is active.
+ */
+ if (cmd.query_dp_alt.data.is_dp_alt_disable == 0 &&
+ cmd.query_dp_alt.data.is_usb &&
+ cmd.query_dp_alt.data.is_dp4 == 0) {
+ link_settings->lane_count =
+ MIN(LANE_COUNT_TWO, link_settings->lane_count);
+ }
}


diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
index c90dee4e9116a..c15bd05def93f 100644
--- a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+++ b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
@@ -164,8 +164,8 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
},
},
.num_states = 5,
- .sr_exit_time_us = 28.0,
- .sr_enter_plus_exit_time_us = 30.0,
+ .sr_exit_time_us = 31.0,
+ .sr_enter_plus_exit_time_us = 33.0,
.sr_exit_z8_time_us = 250.0,
.sr_enter_plus_exit_z8_time_us = 350.0,
.fclk_change_latency_us = 24.0,
diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
index 6c3cae593ad54..ba12c294f971a 100644
--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
+++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.c
@@ -360,10 +360,10 @@ void dpp3_cnv_setup (

tbl_entry.color_space = input_color_space;

- if (color_space >= COLOR_SPACE_YCBCR601)
- select = INPUT_CSC_SELECT_ICSC;
- else
+ if (dpp3_should_bypass_post_csc_for_colorspace(color_space))
select = INPUT_CSC_SELECT_BYPASS;
+ else
+ select = INPUT_CSC_SELECT_ICSC;

dpp3_program_post_csc(dpp_base, color_space, select,
&tbl_entry);
@@ -1527,3 +1527,18 @@ bool dpp3_construct(
return true;
}

+bool dpp3_should_bypass_post_csc_for_colorspace(enum dc_color_space dc_color_space)
+{
+ switch (dc_color_space) {
+ case COLOR_SPACE_UNKNOWN:
+ case COLOR_SPACE_SRGB:
+ case COLOR_SPACE_XR_RGB:
+ case COLOR_SPACE_SRGB_LIMITED:
+ case COLOR_SPACE_MSREF_SCRGB:
+ case COLOR_SPACE_2020_RGB_FULLRANGE:
+ case COLOR_SPACE_2020_RGB_LIMITEDRANGE:
+ return true;
+ default:
+ return false;
+ }
+}
diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.h b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.h
index b110f35ef66bd..d183875b13302 100644
--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.h
+++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn30/dcn30_dpp.h
@@ -641,4 +641,8 @@ void dpp3_program_cm_dealpha(

void dpp3_cm_get_gamut_remap(struct dpp *dpp_base,
struct dpp_grph_csc_adjustment *adjust);
+
+bool dpp3_should_bypass_post_csc_for_colorspace(
+ enum dc_color_space dc_color_space);
+
#endif /* __DC_HWSS_DCN30_H__ */
diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp.c
index 97bf26fa35738..3e24ac07eebd9 100644
--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp.c
+++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp.c
@@ -206,10 +206,10 @@ void dpp401_dpp_setup(

tbl_entry.color_space = input_color_space;

- if (color_space >= COLOR_SPACE_YCBCR601)
- select = INPUT_CSC_SELECT_ICSC;
- else
+ if (dpp3_should_bypass_post_csc_for_colorspace(color_space))
select = INPUT_CSC_SELECT_BYPASS;
+ else
+ select = INPUT_CSC_SELECT_ICSC;

dpp3_program_post_csc(dpp_base, color_space, select,
&tbl_entry);
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
index 31c7dfff27cbb..df69e0cebf785 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dce110/dce110_hwseq.c
@@ -58,6 +58,7 @@
#include "dc_state_priv.h"
#include "dpcd_defs.h"
#include "dsc.h"
+#include "dc_dp_types.h"
/* include DCE11 register header files */
#include "dce/dce_11_0_d.h"
#include "dce/dce_11_0_sh_mask.h"
@@ -1706,20 +1707,25 @@ static void power_down_encoders(struct dc *dc)
int i;

for (i = 0; i < dc->link_count; i++) {
- enum signal_type signal = dc->links[i]->connector_signal;
-
- dc->link_srv->blank_dp_stream(dc->links[i], false);
+ struct dc_link *link = dc->links[i];
+ struct link_encoder *link_enc = link->link_enc;
+ enum signal_type signal = link->connector_signal;

+ dc->link_srv->blank_dp_stream(link, false);
if (signal != SIGNAL_TYPE_EDP)
signal = SIGNAL_TYPE_NONE;

- if (dc->links[i]->ep_type == DISPLAY_ENDPOINT_PHY)
- dc->links[i]->link_enc->funcs->disable_output(
- dc->links[i]->link_enc, signal);
+ if (link->ep_type == DISPLAY_ENDPOINT_PHY)
+ link_enc->funcs->disable_output(link_enc, signal);
+
+ if (link->fec_state == dc_link_fec_enabled) {
+ link_enc->funcs->fec_set_enable(link_enc, false);
+ link_enc->funcs->fec_set_ready(link_enc, false);
+ link->fec_state = dc_link_fec_not_ready;
+ }

- dc->links[i]->link_status.link_active = false;
- memset(&dc->links[i]->cur_link_settings, 0,
- sizeof(dc->links[i]->cur_link_settings));
+ link->link_status.link_active = false;
+ memset(&link->cur_link_settings, 0, sizeof(link->cur_link_settings));
}
}

@@ -1767,6 +1773,9 @@ static void disable_vga_and_power_gate_all_controllers(
struct timing_generator *tg;
struct dc_context *ctx = dc->ctx;

+ if (dc->caps.ips_support)
+ return;
+
for (i = 0; i < dc->res_pool->timing_generator_count; i++) {
tg = dc->res_pool->timing_generators[i];

@@ -1843,13 +1852,16 @@ static void clean_up_dsc_blocks(struct dc *dc)
/* disable DSC in OPTC */
if (i < dc->res_pool->timing_generator_count) {
tg = dc->res_pool->timing_generators[i];
- tg->funcs->set_dsc_config(tg, OPTC_DSC_DISABLED, 0, 0);
+ if (tg->funcs->set_dsc_config)
+ tg->funcs->set_dsc_config(tg, OPTC_DSC_DISABLED, 0, 0);
}
/* disable DSC in stream encoder */
if (i < dc->res_pool->stream_enc_count) {
se = dc->res_pool->stream_enc[i];
- se->funcs->dp_set_dsc_config(se, OPTC_DSC_DISABLED, 0, 0);
- se->funcs->dp_set_dsc_pps_info_packet(se, false, NULL, true);
+ if (se->funcs->dp_set_dsc_config)
+ se->funcs->dp_set_dsc_config(se, OPTC_DSC_DISABLED, 0, 0);
+ if (se->funcs->dp_set_dsc_pps_info_packet)
+ se->funcs->dp_set_dsc_pps_info_packet(se, false, NULL, true);
}
/* disable DSC block */
if (dccg->funcs->set_ref_dscclk)
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
index b94fe14f4b935..a113085385d15 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
@@ -3012,9 +3012,17 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
}
} else {
- if (dccg->funcs->enable_symclk_se)
- dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+ if (dccg->funcs->enable_symclk_se && link_enc) {
+ if (link->ep_type == DISPLAY_ENDPOINT_USB4_DPIA
+ && link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN
+ && !link->link_status.link_active) {
+ if (dccg->funcs->disable_symclk_se)
+ dccg->funcs->disable_symclk_se(dccg, stream_enc->stream_enc_inst,
link_enc->transmitter - TRANSMITTER_UNIPHY_A);
+ } else
+ dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+ link_enc->transmitter - TRANSMITTER_UNIPHY_A);
+ }
}

if (dc->res_pool->dccg->funcs->set_pixel_rate_div)
diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
index f805803be55e4..e62d415b16004 100644
--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
@@ -371,6 +371,8 @@ void dcn401_init_hw(struct dc *dc)
for (i = 0; i < dc->link_count; i++) {
struct dc_link *link = dc->links[i];

+ if (link->ep_type != DISPLAY_ENDPOINT_PHY)
+ continue;
if (link->link_enc->funcs->is_dig_enabled &&
link->link_enc->funcs->is_dig_enabled(link->link_enc) &&
hws->funcs.power_down) {
@@ -965,10 +967,10 @@ static void dcn401_enable_stream_calc(
pipe_ctx->stream->link->cur_link_settings.lane_count;
uint32_t active_total_with_borders;

- if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx))
+ if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) {
*dp_hpo_inst = pipe_ctx->stream_res.hpo_dp_stream_enc->inst;
-
- *phyd32clk = get_phyd32clk_src(pipe_ctx->stream->link);
+ *phyd32clk = get_phyd32clk_src(pipe_ctx->stream->link);
+ }

if (dc_is_tmds_signal(pipe_ctx->stream->signal))
dcn401_calculate_dccg_tmds_div_value(pipe_ctx, tmds_div);
diff --git a/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c b/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
index a0e9e9f0441a4..acf726137488d 100644
--- a/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
+++ b/drivers/gpu/drm/amd/display/dc/mpc/dcn32/dcn32_mpc.c
@@ -721,8 +721,7 @@ bool mpc32_program_shaper(
return false;
}

- if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc)
- mpc32_power_on_shaper_3dlut(mpc, mpcc_id, true);
+ mpc32_power_on_shaper_3dlut(mpc, mpcc_id, true);

current_mode = mpc32_get_shaper_current(mpc, mpcc_id);

diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
index 9cb72805b8d1a..ad3f229f39cba 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
@@ -1227,12 +1227,12 @@ static struct stream_encoder *dcn315_stream_encoder_create(
/*PHYB is wired off in HW, allow front end to remapping, otherwise needs more changes*/

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn31_vpg_create(ctx, vpg_inst);
afmt = dcn31_afmt_create(ctx, afmt_inst);
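The same bounds check is applied across the dcn315/316/32/321/35/351 stream-encoder constructors below: rather than trusting `eng_id <= ENGINE_ID_DIGF`, the engine id is validated against the actual register table size. A sketch with an illustrative table size:

```c
#include <assert.h>

/* Sketch of the engine-id validation; table size is illustrative. */
#define NUM_STREAM_ENC_REGS 5

static int eng_id_valid(int eng_id)
{
	return eng_id >= 0 && eng_id < NUM_STREAM_ENC_REGS;
}
```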
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
index af82e13029c9e..b2d8383411b21 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
@@ -1221,12 +1221,12 @@ static struct stream_encoder *dcn316_stream_encoder_create(
int afmt_inst;

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn31_vpg_create(ctx, vpg_inst);
afmt = dcn31_afmt_create(ctx, afmt_inst);
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
index 6b889c8be0ca3..62dbb0f3073c0 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
@@ -1207,12 +1207,12 @@ static struct stream_encoder *dcn32_stream_encoder_create(
int afmt_inst;

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn32_vpg_create(ctx, vpg_inst);
afmt = dcn32_afmt_create(ctx, afmt_inst);
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
index 74113c578bac4..84aaa7a5ae30f 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
@@ -1190,12 +1190,12 @@ static struct stream_encoder *dcn321_stream_encoder_create(
int afmt_inst;

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn321_vpg_create(ctx, vpg_inst);
afmt = dcn321_afmt_create(ctx, afmt_inst);
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
index 92d5e3252d2d4..90d63e3174344 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
@@ -1272,12 +1272,12 @@ static struct stream_encoder *dcn35_stream_encoder_create(
int afmt_inst;

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn31_vpg_create(ctx, vpg_inst);
afmt = dcn31_afmt_create(ctx, afmt_inst);
diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
index 6e27226cec289..9c72e87b606a2 100644
--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
@@ -1252,12 +1252,12 @@ static struct stream_encoder *dcn35_stream_encoder_create(
int afmt_inst;

/* Mapping of VPG, AFMT, DME register blocks to DIO block instance */
- if (eng_id <= ENGINE_ID_DIGF) {
- vpg_inst = eng_id;
- afmt_inst = eng_id;
- } else
+ if (eng_id < 0 || eng_id >= ARRAY_SIZE(stream_enc_regs))
return NULL;

+ vpg_inst = eng_id;
+ afmt_inst = eng_id;
+
enc1 = kzalloc(sizeof(struct dcn10_stream_encoder), GFP_KERNEL);
vpg = dcn31_vpg_create(ctx, vpg_inst);
afmt = dcn31_afmt_create(ctx, afmt_inst);
diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
index 29cecfab07042..37a91a76208e5 100644
--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
@@ -3449,6 +3449,11 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
max_sclk = 60000;
max_mclk = 80000;
}
+ if ((adev->pdev->device == 0x666f) &&
+ (adev->pdev->revision == 0x00)) {
+ max_sclk = 80000;
+ max_mclk = 95000;
+ }
} else if (adev->asic_type == CHIP_OLAND) {
if ((adev->pdev->revision == 0xC7) ||
(adev->pdev->revision == 0x80) ||
diff --git a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
index 3787db014501e..ae8d7b017968d 100644
--- a/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
+++ b/drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
@@ -78,8 +78,6 @@ drm_plane_state_to_atmel_hlcdc_plane_state(struct drm_plane_state *s)
return container_of(s, struct atmel_hlcdc_plane_state, base);
}

-#define SUBPIXEL_MASK 0xffff
-
static uint32_t rgb_formats[] = {
DRM_FORMAT_C8,
DRM_FORMAT_XRGB4444,
@@ -744,24 +742,15 @@ static int atmel_hlcdc_plane_atomic_check(struct drm_plane *p,
if (ret || !s->visible)
return ret;

- hstate->src_x = s->src.x1;
- hstate->src_y = s->src.y1;
- hstate->src_w = drm_rect_width(&s->src);
- hstate->src_h = drm_rect_height(&s->src);
+ hstate->src_x = s->src.x1 >> 16;
+ hstate->src_y = s->src.y1 >> 16;
+ hstate->src_w = drm_rect_width(&s->src) >> 16;
+ hstate->src_h = drm_rect_height(&s->src) >> 16;
hstate->crtc_x = s->dst.x1;
hstate->crtc_y = s->dst.y1;
hstate->crtc_w = drm_rect_width(&s->dst);
hstate->crtc_h = drm_rect_height(&s->dst);

- if ((hstate->src_x | hstate->src_y | hstate->src_w | hstate->src_h) &
- SUBPIXEL_MASK)
- return -EINVAL;
-
- hstate->src_x >>= 16;
- hstate->src_y >>= 16;
- hstate->src_w >>= 16;
- hstate->src_h >>= 16;
-
hstate->nplanes = fb->format->num_planes;
if (hstate->nplanes > ATMEL_HLCDC_LAYER_MAX_PLANES)
return -EINVAL;
@@ -1185,8 +1174,7 @@ atmel_hlcdc_plane_atomic_duplicate_state(struct drm_plane *p)
return NULL;
}

- if (copy->base.fb)
- drm_framebuffer_get(copy->base.fb);
+ __drm_atomic_helper_plane_duplicate_state(p, &copy->base);

return &copy->base;
}
@@ -1204,8 +1192,7 @@ static void atmel_hlcdc_plane_atomic_destroy_state(struct drm_plane *p,
state->dscrs[i]->self);
}

- if (s->fb)
- drm_framebuffer_put(s->fb);
+ __drm_atomic_helper_plane_destroy_state(s);

kfree(state);
}
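The atmel-hlcdc change above relies on atomic plane source coordinates being 16.16 fixed point: the integer part is recovered with a right shift by 16, truncating any fractional part (which the removed SUBPIXEL_MASK check used to reject outright). A sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: DRM plane src coordinates are 16.16 fixed point. */
static uint32_t fixed16_to_int(uint32_t v)
{
	return v >> 16;	/* drop the fractional 16 bits */
}
```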
diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
index 7244c3abb7f99..3273c368b8357 100644
--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
+++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
@@ -1801,7 +1801,7 @@ static const struct drm_edid *anx7625_edid_read(struct anx7625_data *ctx)
return NULL;
}

- ctx->cached_drm_edid = drm_edid_alloc(edid_buf, FOUR_BLOCK_SIZE);
+ ctx->cached_drm_edid = drm_edid_alloc(edid_buf, edid_num * ONE_BLOCK_SIZE);
kfree(edid_buf);

out:
diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
index 3e5f721d75400..997f2489f00f3 100644
--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
+++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
@@ -4571,7 +4571,8 @@ int drm_dp_atomic_release_time_slots(struct drm_atomic_state *state,
if (!payload->delete) {
payload->pbn = 0;
payload->delete = true;
- topology_state->payload_mask &= ~BIT(payload->vcpi - 1);
+ if (payload->vcpi > 0)
+ topology_state->payload_mask &= ~BIT(payload->vcpi - 1);
}

return 0;
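The VCPI guard above avoids evaluating BIT(vcpi - 1) with vcpi == 0, which would be a shift by -1 (undefined behaviour). A sketch of the fixed release path:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: only clear the payload bit for an allocated slot (vcpi > 0). */
static uint64_t release_payload(uint64_t mask, int vcpi)
{
	if (vcpi > 0)
		mask &= ~(UINT64_C(1) << (vcpi - 1));
	return mask;
}
```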
diff --git a/drivers/gpu/drm/drm_buddy.c b/drivers/gpu/drm/drm_buddy.c
index 7debf079c943f..adfcdd5645d9a 100644
--- a/drivers/gpu/drm/drm_buddy.c
+++ b/drivers/gpu/drm/drm_buddy.c
@@ -414,6 +414,7 @@ void drm_buddy_fini(struct drm_buddy *mm)

for_each_free_tree(i)
kfree(mm->free_trees[i]);
+ kfree(mm->free_trees);
kfree(mm->roots);
}
EXPORT_SYMBOL(drm_buddy_fini);
@@ -1149,6 +1150,15 @@ int drm_buddy_alloc_blocks(struct drm_buddy *mm,
order = fls(pages) - 1;
min_order = ilog2(min_block_size) - ilog2(mm->chunk_size);

+ if (order > mm->max_order || size > mm->size) {
+ if ((flags & DRM_BUDDY_CONTIGUOUS_ALLOCATION) &&
+ !(flags & DRM_BUDDY_RANGE_ALLOCATION))
+ return __alloc_contig_try_harder(mm, original_size,
+ original_min_size, blocks);
+
+ return -EINVAL;
+ }
+
do {
order = min(order, (unsigned int)fls(pages) - 1);
BUG_ON(order > mm->max_order);
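The early-out added to drm_buddy_alloc_blocks() above refuses a request whose order exceeds max_order before entering the split loop. A sketch of that order computation, modelling fls() with a builtin:

```c
#include <assert.h>

/* Sketch: order = fls(pages) - 1; reject if it exceeds max_order.
 * pages must be non-zero (clz of 0 is undefined). */
static int order_ok(unsigned int pages, unsigned int max_order)
{
	unsigned int order =
		(8 * sizeof(unsigned int) - __builtin_clz(pages)) - 1;
	return order <= max_order;
}
```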
diff --git a/drivers/gpu/drm/drm_property.c b/drivers/gpu/drm/drm_property.c
index 596272149a359..3c88b5fbdf28c 100644
--- a/drivers/gpu/drm/drm_property.c
+++ b/drivers/gpu/drm/drm_property.c
@@ -562,7 +562,7 @@ drm_property_create_blob(struct drm_device *dev, size_t length,
if (!length || length > INT_MAX - sizeof(struct drm_property_blob))
return ERR_PTR(-EINVAL);

- blob = kvzalloc(sizeof(struct drm_property_blob)+length, GFP_KERNEL);
+ blob = kvzalloc(sizeof(struct drm_property_blob) + length, GFP_KERNEL_ACCOUNT);
if (!blob)
return ERR_PTR(-ENOMEM);

diff --git a/drivers/gpu/drm/i915/display/intel_acpi.c b/drivers/gpu/drm/i915/display/intel_acpi.c
index c3b29a331d725..859fedb9d93ea 100644
--- a/drivers/gpu/drm/i915/display/intel_acpi.c
+++ b/drivers/gpu/drm/i915/display/intel_acpi.c
@@ -93,6 +93,7 @@ static void intel_dsm_platform_mux_info(acpi_handle dhandle)

if (!pkg->package.count) {
DRM_DEBUG_DRIVER("no connection in _DSM\n");
+ ACPI_FREE(pkg);
return;
}

diff --git a/drivers/gpu/drm/i915/intel_wakeref.c b/drivers/gpu/drm/i915/intel_wakeref.c
index dea2f63184f89..40fbf2ccaee21 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.c
+++ b/drivers/gpu/drm/i915/intel_wakeref.c
@@ -78,7 +78,7 @@ void __intel_wakeref_put_last(struct intel_wakeref *wf, unsigned long flags)
/* Assume we are not in process context and so cannot sleep. */
if (flags & INTEL_WAKEREF_PUT_ASYNC || !mutex_trylock(&wf->mutex)) {
mod_delayed_work(wf->i915->unordered_wq, &wf->work,
- FIELD_GET(INTEL_WAKEREF_PUT_DELAY, flags));
+ FIELD_GET(INTEL_WAKEREF_PUT_DELAY_MASK, flags));
return;
}

diff --git a/drivers/gpu/drm/i915/intel_wakeref.h b/drivers/gpu/drm/i915/intel_wakeref.h
index 68aa3be482515..ca4c4a91a49fd 100644
--- a/drivers/gpu/drm/i915/intel_wakeref.h
+++ b/drivers/gpu/drm/i915/intel_wakeref.h
@@ -131,17 +131,16 @@ intel_wakeref_get_if_active(struct intel_wakeref *wf)
return atomic_inc_not_zero(&wf->count);
}

-enum {
- INTEL_WAKEREF_PUT_ASYNC_BIT = 0,
- __INTEL_WAKEREF_PUT_LAST_BIT__
-};
-
static inline void
intel_wakeref_might_get(struct intel_wakeref *wf)
{
might_lock(&wf->mutex);
}

+/* flags for __intel_wakeref_put() and __intel_wakeref_put_last */
+#define INTEL_WAKEREF_PUT_ASYNC BIT(0)
+#define INTEL_WAKEREF_PUT_DELAY_MASK GENMASK(BITS_PER_LONG - 1, 1)
+
/**
* __intel_wakeref_put: Release the wakeref
* @wf: the wakeref
@@ -157,9 +156,6 @@ intel_wakeref_might_get(struct intel_wakeref *wf)
*/
static inline void
__intel_wakeref_put(struct intel_wakeref *wf, unsigned long flags)
-#define INTEL_WAKEREF_PUT_ASYNC BIT(INTEL_WAKEREF_PUT_ASYNC_BIT)
-#define INTEL_WAKEREF_PUT_DELAY \
- GENMASK(BITS_PER_LONG - 1, __INTEL_WAKEREF_PUT_LAST_BIT__)
{
INTEL_WAKEREF_BUG_ON(atomic_read(&wf->count) <= 0);
if (unlikely(!atomic_add_unless(&wf->count, -1, 1)))
@@ -184,7 +180,7 @@ intel_wakeref_put_delay(struct intel_wakeref *wf, unsigned long delay)
{
__intel_wakeref_put(wf,
INTEL_WAKEREF_PUT_ASYNC |
- FIELD_PREP(INTEL_WAKEREF_PUT_DELAY, delay));
+ FIELD_PREP(INTEL_WAKEREF_PUT_DELAY_MASK, delay));
}

static inline void
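The intel_wakeref change above packs an async flag in bit 0 and the delay in the remaining bits, so the delay must be packed and unpacked with matching FIELD_PREP/FIELD_GET masks. A simplified sketch using explicit shifts (names illustrative):

```c
#include <assert.h>

/* Sketch of the flag layout: bit 0 = async, bits [N-1:1] = delay. */
#define PUT_ASYNC	(1UL << 0)
#define DELAY_SHIFT	1

static unsigned long pack_flags(unsigned long delay)
{
	return PUT_ASYNC | (delay << DELAY_SHIFT);
}

static unsigned long unpack_delay(unsigned long flags)
{
	return flags >> DELAY_SHIFT;
}
```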
diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
index 2e25af3462ab6..a0ca41f4818a3 100644
--- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c
@@ -77,7 +77,10 @@ static bool a2xx_me_init(struct msm_gpu *gpu)

/* Vertex and Pixel Shader Start Addresses in instructions
* (3 DWORDS per instruction) */
- OUT_RING(ring, 0x80000180);
+ if (adreno_is_a225(adreno_gpu))
+ OUT_RING(ring, 0x80000300);
+ else
+ OUT_RING(ring, 0x80000180);
/* Maximum Contexts */
OUT_RING(ring, 0x00000001);
/* Write Confirm Interval and The CP will wait the
diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
index 2f153e0b5c6a9..d53ab3d886262 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_2_sc7280.h
@@ -13,6 +13,7 @@ static const struct dpu_caps sc7280_dpu_caps = {
.has_dim_layer = true,
.has_idle_pc = true,
.max_linewidth = 2400,
+ .has_3d_merge = true,
.pixel_ram_size = DEFAULT_PIXEL_RAM_SIZE,
};

@@ -142,18 +143,25 @@ static const struct dpu_pingpong_cfg sc7280_pp[] = {
.base = 0x6b000, .len = 0,
.features = BIT(DPU_PINGPONG_DITHER),
.sblk = &sc7280_pp_sblk,
- .merge_3d = 0,
+ .merge_3d = MERGE_3D_1,
.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 10),
}, {
.name = "pingpong_3", .id = PINGPONG_3,
.base = 0x6c000, .len = 0,
.features = BIT(DPU_PINGPONG_DITHER),
.sblk = &sc7280_pp_sblk,
- .merge_3d = 0,
+ .merge_3d = MERGE_3D_1,
.intr_done = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 11),
},
};

+static const struct dpu_merge_3d_cfg sc7280_merge_3d[] = {
+ {
+ .name = "merge_3d_1", .id = MERGE_3D_1,
+ .base = 0x4f000, .len = 0x8,
+ },
+};
+
/* NOTE: sc7280 only has one DSC hard slice encoder */
static const struct dpu_dsc_cfg sc7280_dsc[] = {
{
@@ -259,6 +267,8 @@ const struct dpu_mdss_cfg dpu_sc7280_cfg = {
.mixer = sc7280_lm,
.pingpong_count = ARRAY_SIZE(sc7280_pp),
.pingpong = sc7280_pp,
+ .merge_3d_count = ARRAY_SIZE(sc7280_merge_3d),
+ .merge_3d = sc7280_merge_3d,
.dsc_count = ARRAY_SIZE(sc7280_dsc),
.dsc = sc7280_dsc,
.wb_count = ARRAY_SIZE(sc7280_wb),
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 47b514c89ce66..170d60bb7602d 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -768,24 +768,24 @@ static void _dpu_encoder_update_vsync_source(struct dpu_encoder_virt *dpu_enc,
return;
}

+ vsync_cfg.vsync_source = disp_info->vsync_source;
+ vsync_cfg.frame_rate = drm_mode_vrefresh(&dpu_enc->base.crtc->state->adjusted_mode);
+
if (hw_mdptop->ops.setup_vsync_source) {
for (i = 0; i < dpu_enc->num_phys_encs; i++)
vsync_cfg.ppnumber[i] = dpu_enc->hw_pp[i]->idx;

vsync_cfg.pp_count = dpu_enc->num_phys_encs;
- vsync_cfg.frame_rate = drm_mode_vrefresh(&dpu_enc->base.crtc->state->adjusted_mode);
-
- vsync_cfg.vsync_source = disp_info->vsync_source;

hw_mdptop->ops.setup_vsync_source(hw_mdptop, &vsync_cfg);
+ }

- for (i = 0; i < dpu_enc->num_phys_encs; i++) {
- phys_enc = dpu_enc->phys_encs[i];
+ for (i = 0; i < dpu_enc->num_phys_encs; i++) {
+ phys_enc = dpu_enc->phys_encs[i];

- if (phys_enc->has_intf_te && phys_enc->hw_intf->ops.vsync_sel)
- phys_enc->hw_intf->ops.vsync_sel(phys_enc->hw_intf,
- vsync_cfg.vsync_source);
- }
+ if (phys_enc->has_intf_te && phys_enc->hw_intf->ops.vsync_sel)
+ phys_enc->hw_intf->ops.vsync_sel(phys_enc->hw_intf,
+ &vsync_cfg);
}
}

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
index 6fc31d47cd1dc..65dad86edb46e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_cmd.c
@@ -677,10 +677,11 @@ static int dpu_encoder_phys_cmd_wait_for_commit_done(
if (!dpu_encoder_phys_cmd_is_master(phys_enc))
return 0;

- if (phys_enc->hw_ctl->ops.is_started(phys_enc->hw_ctl))
- return dpu_encoder_phys_cmd_wait_for_tx_complete(phys_enc);
+ if (phys_enc->irq[INTR_IDX_CTL_START] &&
+ !phys_enc->hw_ctl->ops.is_started(phys_enc->hw_ctl))
+ return _dpu_encoder_phys_cmd_wait_for_ctl_start(phys_enc);

- return _dpu_encoder_phys_cmd_wait_for_ctl_start(phys_enc);
+ return dpu_encoder_phys_cmd_wait_for_tx_complete(phys_enc);
}

static void dpu_encoder_phys_cmd_handle_post_kickoff(
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
index 29cb854f831a3..17c0f40385f1e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.c
@@ -67,6 +67,10 @@
#define INTF_MISR_CTRL 0x180
#define INTF_MISR_SIGNATURE 0x184

+#define INTF_WD_TIMER_0_CTL 0x230
+#define INTF_WD_TIMER_0_CTL2 0x234
+#define INTF_WD_TIMER_0_LOAD_VALUE 0x238
+
#define INTF_MUX 0x25C
#define INTF_STATUS 0x26C
#define INTF_AVR_CONTROL 0x270
@@ -477,7 +481,20 @@ static int dpu_hw_intf_get_vsync_info(struct dpu_hw_intf *intf,
}

static void dpu_hw_intf_vsync_sel(struct dpu_hw_intf *intf,
- enum dpu_vsync_source vsync_source)
+ struct dpu_vsync_source_cfg *cfg)
+{
+ struct dpu_hw_blk_reg_map *c;
+
+ if (!intf)
+ return;
+
+ c = &intf->hw;
+
+ DPU_REG_WRITE(c, INTF_TEAR_MDP_VSYNC_SEL, (cfg->vsync_source & 0xf));
+}
+
+static void dpu_hw_intf_vsync_sel_v8(struct dpu_hw_intf *intf,
+ struct dpu_vsync_source_cfg *cfg)
{
struct dpu_hw_blk_reg_map *c;

@@ -486,7 +503,30 @@ static void dpu_hw_intf_vsync_sel(struct dpu_hw_intf *intf,

c = &intf->hw;

- DPU_REG_WRITE(c, INTF_TEAR_MDP_VSYNC_SEL, (vsync_source & 0xf));
+ if (cfg->vsync_source >= DPU_VSYNC_SOURCE_WD_TIMER_4 &&
+ cfg->vsync_source <= DPU_VSYNC_SOURCE_WD_TIMER_1) {
+ pr_warn_once("DPU 8.x supports only GPIOs and timer0 as TE sources\n");
+ return;
+ }
+
+ if (cfg->vsync_source == DPU_VSYNC_SOURCE_WD_TIMER_0) {
+ u32 reg;
+
+ DPU_REG_WRITE(c, INTF_WD_TIMER_0_LOAD_VALUE,
+ CALCULATE_WD_LOAD_VALUE(cfg->frame_rate));
+
+ DPU_REG_WRITE(c, INTF_WD_TIMER_0_CTL, BIT(0)); /* clear timer */
+
+ reg = BIT(8); /* enable heartbeat timer */
+ reg |= BIT(0); /* enable WD timer */
+ reg |= BIT(1); /* select default 16 clock ticks */
+ DPU_REG_WRITE(c, INTF_WD_TIMER_0_CTL2, reg);
+
+ /* make sure that timers are enabled/disabled for vsync state */
+ wmb();
+ }
+
+ dpu_hw_intf_vsync_sel(intf, cfg);
}

static void dpu_hw_intf_disable_autorefresh(struct dpu_hw_intf *intf,
@@ -590,7 +630,10 @@ struct dpu_hw_intf *dpu_hw_intf_init(struct drm_device *dev,
c->ops.enable_tearcheck = dpu_hw_intf_enable_te;
c->ops.disable_tearcheck = dpu_hw_intf_disable_te;
c->ops.connect_external_te = dpu_hw_intf_connect_external_te;
- c->ops.vsync_sel = dpu_hw_intf_vsync_sel;
+ if (mdss_rev->core_major_ver >= 8)
+ c->ops.vsync_sel = dpu_hw_intf_vsync_sel_v8;
+ else
+ c->ops.vsync_sel = dpu_hw_intf_vsync_sel;
c->ops.disable_autorefresh = dpu_hw_intf_disable_autorefresh;
}

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
index fc23650dfbf05..4039f11068764 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_intf.h
@@ -12,6 +12,7 @@
#include "dpu_hw_util.h"

struct dpu_hw_intf;
+struct dpu_vsync_source_cfg;

/* intf timing settings */
struct dpu_hw_intf_timing_params {
@@ -108,7 +109,7 @@ struct dpu_hw_intf_ops {

int (*connect_external_te)(struct dpu_hw_intf *intf, bool enable_external_te);

- void (*vsync_sel)(struct dpu_hw_intf *intf, enum dpu_vsync_source vsync_source);
+ void (*vsync_sel)(struct dpu_hw_intf *intf, struct dpu_vsync_source_cfg *cfg);

/**
* Disable autorefresh if enabled
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
index 2040bee8d512f..2c312439cf184 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
@@ -22,13 +22,6 @@
#define TRAFFIC_SHAPER_WR_CLIENT(num) (0x060 + (num * 4))
#define TRAFFIC_SHAPER_FIXPOINT_FACTOR 4

-#define MDP_TICK_COUNT 16
-#define XO_CLK_RATE 19200
-#define MS_TICKS_IN_SEC 1000
-
-#define CALCULATE_WD_LOAD_VALUE(fps) \
- ((uint32_t)((MS_TICKS_IN_SEC * XO_CLK_RATE)/(MDP_TICK_COUNT * fps)))
-
static void dpu_hw_setup_split_pipe(struct dpu_hw_mdp *mdp,
struct split_pipe_cfg *cfg)
{
diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
index 67b08e99335dc..6fe65bc3bff4e 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_util.h
@@ -21,6 +21,13 @@

#define TO_S15D16(_x_)((_x_) << 7)

+#define MDP_TICK_COUNT 16
+#define XO_CLK_RATE 19200
+#define MS_TICKS_IN_SEC 1000
+
+#define CALCULATE_WD_LOAD_VALUE(fps) \
+ ((uint32_t)((MS_TICKS_IN_SEC * XO_CLK_RATE)/(MDP_TICK_COUNT * fps)))
+
extern const struct dpu_csc_cfg dpu_csc_YUV2RGB_601L;
extern const struct dpu_csc_cfg dpu_csc10_YUV2RGB_601L;
extern const struct dpu_csc_cfg dpu_csc10_rgb2yuv_601l;
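The hunks above move `CALCULATE_WD_LOAD_VALUE()` from dpu_hw_top.c into dpu_hw_util.h so the new per-interface watchdog-timer path in dpu_hw_intf.c can use it. The arithmetic it performs can be sketched directly from the macro's constants: the 19.2 MHz XO clock is divided by 16 ticks, and the load value is the number of divided ticks in one frame period (constants copied from the patch; the helper function name is this sketch's own):

```c
#include <stdint.h>

/* Constants as defined in dpu_hw_util.h by this patch. */
#define MDP_TICK_COUNT  16
#define XO_CLK_RATE     19200  /* XO clock in kHz */
#define MS_TICKS_IN_SEC 1000

/* Number of (XO / MDP_TICK_COUNT) ticks per frame at the given refresh
 * rate; this is what gets written to INTF_WD_TIMER_0_LOAD_VALUE. */
static uint32_t wd_load_value(uint32_t fps)
{
	return (uint32_t)((MS_TICKS_IN_SEC * XO_CLK_RATE) /
			  (MDP_TICK_COUNT * fps));
}
```

For a 60 Hz panel this is 19,200,000 / (16 * 60) = 20000 ticks per frame, so the watchdog fires a synthetic TE once per frame period when no panel TE is wired up.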
diff --git a/drivers/gpu/drm/msm/disp/mdp_format.c b/drivers/gpu/drm/msm/disp/mdp_format.c
index 426782d50cb49..eebedb1a2636e 100644
--- a/drivers/gpu/drm/msm/disp/mdp_format.c
+++ b/drivers/gpu/drm/msm/disp/mdp_format.c
@@ -479,25 +479,25 @@ static const struct msm_format mdp_formats[] = {
0, BPC8, BPC8, BPC8,
C2_R_Cr, C0_G_Y, C1_B_Cb, C0_G_Y,
false, CHROMA_H2V1, 4, 2, MSM_FORMAT_FLAG_YUV,
- MDP_FETCH_LINEAR, 2),
+ MDP_FETCH_LINEAR, 1),

INTERLEAVED_YUV_FMT(UYVY,
0, BPC8, BPC8, BPC8,
C1_B_Cb, C0_G_Y, C2_R_Cr, C0_G_Y,
false, CHROMA_H2V1, 4, 2, MSM_FORMAT_FLAG_YUV,
- MDP_FETCH_LINEAR, 2),
+ MDP_FETCH_LINEAR, 1),

INTERLEAVED_YUV_FMT(YUYV,
0, BPC8, BPC8, BPC8,
C0_G_Y, C1_B_Cb, C0_G_Y, C2_R_Cr,
false, CHROMA_H2V1, 4, 2, MSM_FORMAT_FLAG_YUV,
- MDP_FETCH_LINEAR, 2),
+ MDP_FETCH_LINEAR, 1),

INTERLEAVED_YUV_FMT(YVYU,
0, BPC8, BPC8, BPC8,
C0_G_Y, C2_R_Cr, C0_G_Y, C1_B_Cb,
false, CHROMA_H2V1, 4, 2, MSM_FORMAT_FLAG_YUV,
- MDP_FETCH_LINEAR, 2),
+ MDP_FETCH_LINEAR, 1),

/* 3 plane YUV */
PLANAR_YUV_FMT(YUV420,
diff --git a/drivers/gpu/drm/panel/panel-edp.c b/drivers/gpu/drm/panel/panel-edp.c
index 663af985d1b38..d6806956f63e9 100644
--- a/drivers/gpu/drm/panel/panel-edp.c
+++ b/drivers/gpu/drm/panel/panel-edp.c
@@ -1852,6 +1852,7 @@ static const struct edp_panel_entry edp_panels[] = {
EDP_PANEL_ENTRY('A', 'U', 'O', 0x615c, &delay_200_500_e50, "B116XAN06.1"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x635c, &delay_200_500_e50, "B116XAN06.3"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x639c, &delay_200_500_e50, "B140HAK02.7"),
+ EDP_PANEL_ENTRY('A', 'U', 'O', 0x643d, &delay_200_500_e50, "B140HAN06.4"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x723c, &delay_200_500_e50, "B140XTN07.2"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x73aa, &delay_200_500_e50, "B116XTN02.3"),
EDP_PANEL_ENTRY('A', 'U', 'O', 0x8594, &delay_200_500_e50, "B133UAN01.0"),
diff --git a/drivers/gpu/drm/panel/panel-jdi-lpm102a188a.c b/drivers/gpu/drm/panel/panel-jdi-lpm102a188a.c
index 5b5082efb282b..eba560e422e8b 100644
--- a/drivers/gpu/drm/panel/panel-jdi-lpm102a188a.c
+++ b/drivers/gpu/drm/panel/panel-jdi-lpm102a188a.c
@@ -510,8 +510,10 @@ static void jdi_panel_dsi_remove(struct mipi_dsi_device *dsi)
int err;

/* only detach from host for the DSI-LINK2 interface */
- if (!jdi)
+ if (!jdi) {
mipi_dsi_detach(dsi);
+ return;
+ }

err = jdi_panel_disable(&jdi->base);
if (err < 0)
diff --git a/drivers/gpu/drm/panel/panel-lg-sw43408.c b/drivers/gpu/drm/panel/panel-lg-sw43408.c
index f3dcc39670eae..8109ded2fe563 100644
--- a/drivers/gpu/drm/panel/panel-lg-sw43408.c
+++ b/drivers/gpu/drm/panel/panel-lg-sw43408.c
@@ -294,10 +294,6 @@ static void sw43408_remove(struct mipi_dsi_device *dsi)
struct sw43408_panel *ctx = mipi_dsi_get_drvdata(dsi);
int ret;

- ret = sw43408_unprepare(&ctx->base);
- if (ret < 0)
- dev_err(&dsi->dev, "failed to unprepare panel: %d\n", ret);
-
ret = mipi_dsi_detach(dsi);
if (ret < 0)
dev_err(&dsi->dev, "failed to detach from DSI host: %d\n", ret);
diff --git a/drivers/gpu/drm/panthor/panthor_gpu.c b/drivers/gpu/drm/panthor/panthor_gpu.c
index 1ca2924e6d552..15e4371d98945 100644
--- a/drivers/gpu/drm/panthor/panthor_gpu.c
+++ b/drivers/gpu/drm/panthor/panthor_gpu.c
@@ -384,38 +384,42 @@ int panthor_gpu_l2_power_on(struct panthor_device *ptdev)
int panthor_gpu_flush_caches(struct panthor_device *ptdev,
u32 l2, u32 lsc, u32 other)
{
- bool timedout = false;
unsigned long flags;
+ int ret = 0;

/* Serialize cache flush operations. */
guard(mutex)(&ptdev->gpu->cache_flush_lock);

spin_lock_irqsave(&ptdev->gpu->reqs_lock, flags);
- if (!drm_WARN_ON(&ptdev->base,
- ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED)) {
+ if (!(ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED)) {
ptdev->gpu->pending_reqs |= GPU_IRQ_CLEAN_CACHES_COMPLETED;
gpu_write(ptdev, GPU_CMD, GPU_FLUSH_CACHES(l2, lsc, other));
+ } else {
+ ret = -EIO;
}
spin_unlock_irqrestore(&ptdev->gpu->reqs_lock, flags);

+ if (ret)
+ return ret;
+
if (!wait_event_timeout(ptdev->gpu->reqs_acked,
!(ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED),
msecs_to_jiffies(100))) {
spin_lock_irqsave(&ptdev->gpu->reqs_lock, flags);
if ((ptdev->gpu->pending_reqs & GPU_IRQ_CLEAN_CACHES_COMPLETED) != 0 &&
!(gpu_read(ptdev, GPU_INT_RAWSTAT) & GPU_IRQ_CLEAN_CACHES_COMPLETED))
- timedout = true;
+ ret = -ETIMEDOUT;
else
ptdev->gpu->pending_reqs &= ~GPU_IRQ_CLEAN_CACHES_COMPLETED;
spin_unlock_irqrestore(&ptdev->gpu->reqs_lock, flags);
}

- if (timedout) {
+ if (ret) {
+ panthor_device_schedule_reset(ptdev);
drm_err(&ptdev->base, "Flush caches timeout");
- return -ETIMEDOUT;
}

- return 0;
+ return ret;
}

/**
@@ -455,6 +459,7 @@ int panthor_gpu_soft_reset(struct panthor_device *ptdev)
return -ETIMEDOUT;
}

+ ptdev->gpu->pending_reqs = 0;
return 0;
}

diff --git a/drivers/gpu/drm/panthor/panthor_mmu.c b/drivers/gpu/drm/panthor/panthor_mmu.c
index ed769749ec354..e221708cf1aa3 100644
--- a/drivers/gpu/drm/panthor/panthor_mmu.c
+++ b/drivers/gpu/drm/panthor/panthor_mmu.c
@@ -1554,6 +1554,10 @@ static void panthor_vm_destroy(struct panthor_vm *vm)

vm->destroyed = true;

+ /* Tell scheduler to stop all GPU work related to this VM */
+ if (refcount_read(&vm->as.active_cnt) > 0)
+ panthor_sched_prepare_for_vm_destruction(vm->ptdev);
+
mutex_lock(&vm->heaps.lock);
panthor_heap_pool_destroy(vm->heaps.pool);
vm->heaps.pool = NULL;
diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
index 1d95decddc273..2d4dacece655f 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.c
+++ b/drivers/gpu/drm/panthor/panthor_sched.c
@@ -1853,10 +1853,10 @@ struct panthor_sched_tick_ctx {
struct list_head groups[PANTHOR_CSG_PRIORITY_COUNT];
u32 idle_group_count;
u32 group_count;
- enum panthor_csg_priority min_priority;
struct panthor_vm *vms[MAX_CS_PER_CSG];
u32 as_count;
bool immediate_tick;
+ bool stop_tick;
u32 csg_upd_failed_mask;
};

@@ -1921,17 +1921,21 @@ tick_ctx_pick_groups_from_list(const struct panthor_scheduler *sched,
if (!owned_by_tick_ctx)
group_get(group);

- list_move_tail(&group->run_node, &ctx->groups[group->priority]);
ctx->group_count++;
+
+ /* If we have more than one active group with the same priority,
+ * we need to keep ticking to rotate the CSG priority.
+ */
if (group_is_idle(group))
ctx->idle_group_count++;
+ else if (!list_empty(&ctx->groups[group->priority]))
+ ctx->stop_tick = false;
+
+ list_move_tail(&group->run_node, &ctx->groups[group->priority]);

if (i == ctx->as_count)
ctx->vms[ctx->as_count++] = group->vm;

- if (ctx->min_priority > group->priority)
- ctx->min_priority = group->priority;
-
if (tick_ctx_is_full(sched, ctx))
return;
}
@@ -1940,31 +1944,22 @@ tick_ctx_pick_groups_from_list(const struct panthor_scheduler *sched,
static void
tick_ctx_insert_old_group(struct panthor_scheduler *sched,
struct panthor_sched_tick_ctx *ctx,
- struct panthor_group *group,
- bool full_tick)
+ struct panthor_group *group)
{
struct panthor_csg_slot *csg_slot = &sched->csg_slots[group->csg_id];
struct panthor_group *other_group;

- if (!full_tick) {
- list_add_tail(&group->run_node, &ctx->old_groups[group->priority]);
- return;
- }
-
- /* Rotate to make sure groups with lower CSG slot
- * priorities have a chance to get a higher CSG slot
- * priority next time they get picked. This priority
- * has an impact on resource request ordering, so it's
- * important to make sure we don't let one group starve
- * all other groups with the same group priority.
- */
+ /* Class groups in descending priority order so we can easily rotate. */
list_for_each_entry(other_group,
&ctx->old_groups[csg_slot->group->priority],
run_node) {
struct panthor_csg_slot *other_csg_slot = &sched->csg_slots[other_group->csg_id];

- if (other_csg_slot->priority > csg_slot->priority) {
- list_add_tail(&csg_slot->group->run_node, &other_group->run_node);
+ /* Our group has a higher prio than the one we're testing against,
+ * place it just before.
+ */
+ if (csg_slot->priority > other_csg_slot->priority) {
+ list_add_tail(&group->run_node, &other_group->run_node);
return;
}
}
@@ -1974,8 +1969,7 @@ tick_ctx_insert_old_group(struct panthor_scheduler *sched,

static void
tick_ctx_init(struct panthor_scheduler *sched,
- struct panthor_sched_tick_ctx *ctx,
- bool full_tick)
+ struct panthor_sched_tick_ctx *ctx)
{
struct panthor_device *ptdev = sched->ptdev;
struct panthor_csg_slots_upd_ctx upd_ctx;
@@ -1985,7 +1979,7 @@ tick_ctx_init(struct panthor_scheduler *sched,
memset(ctx, 0, sizeof(*ctx));
csgs_upd_ctx_init(&upd_ctx);

- ctx->min_priority = PANTHOR_CSG_PRIORITY_COUNT;
+ ctx->stop_tick = true;
for (i = 0; i < ARRAY_SIZE(ctx->groups); i++) {
INIT_LIST_HEAD(&ctx->groups[i]);
INIT_LIST_HEAD(&ctx->old_groups[i]);
@@ -2013,7 +2007,7 @@ tick_ctx_init(struct panthor_scheduler *sched,
group->fatal_queues |= GENMASK(group->queue_count - 1, 0);
}

- tick_ctx_insert_old_group(sched, ctx, group, full_tick);
+ tick_ctx_insert_old_group(sched, ctx, group);
csgs_upd_ctx_queue_reqs(ptdev, &upd_ctx, i,
csg_iface->output->ack ^ CSG_STATUS_UPDATE,
CSG_STATUS_UPDATE);
@@ -2297,32 +2291,18 @@ static u64
tick_ctx_update_resched_target(struct panthor_scheduler *sched,
const struct panthor_sched_tick_ctx *ctx)
{
- /* We had space left, no need to reschedule until some external event happens. */
- if (!tick_ctx_is_full(sched, ctx))
- goto no_tick;
-
- /* If idle groups were scheduled, no need to wake up until some external
- * event happens (group unblocked, new job submitted, ...).
- */
- if (ctx->idle_group_count)
- goto no_tick;
+ u64 resched_target;

- if (drm_WARN_ON(&sched->ptdev->base, ctx->min_priority >= PANTHOR_CSG_PRIORITY_COUNT))
+ if (ctx->stop_tick)
goto no_tick;

- /* If there are groups of the same priority waiting, we need to
- * keep the scheduler ticking, otherwise, we'll just wait for
- * new groups with higher priority to be queued.
- */
- if (!list_empty(&sched->groups.runnable[ctx->min_priority])) {
- u64 resched_target = sched->last_tick + sched->tick_period;
+ resched_target = sched->last_tick + sched->tick_period;

- if (time_before64(sched->resched_target, sched->last_tick) ||
- time_before64(resched_target, sched->resched_target))
- sched->resched_target = resched_target;
+ if (time_before64(sched->resched_target, sched->last_tick) ||
+ time_before64(resched_target, sched->resched_target))
+ sched->resched_target = resched_target;

- return sched->resched_target - sched->last_tick;
- }
+ return sched->resched_target - sched->last_tick;

no_tick:
sched->resched_target = U64_MAX;
@@ -2335,9 +2315,11 @@ static void tick_work(struct work_struct *work)
tick_work.work);
struct panthor_device *ptdev = sched->ptdev;
struct panthor_sched_tick_ctx ctx;
+ u64 resched_target = sched->resched_target;
u64 remaining_jiffies = 0, resched_delay;
u64 now = get_jiffies_64();
int prio, ret, cookie;
+ bool full_tick;

if (!drm_dev_enter(&ptdev->base, &cookie))
return;
@@ -2346,18 +2328,24 @@ static void tick_work(struct work_struct *work)
if (drm_WARN_ON(&ptdev->base, ret))
goto out_dev_exit;

- if (time_before64(now, sched->resched_target))
- remaining_jiffies = sched->resched_target - now;
+ /* If the tick is stopped, calculate when the next tick would be */
+ if (resched_target == U64_MAX)
+ resched_target = sched->last_tick + sched->tick_period;
+
+ if (time_before64(now, resched_target))
+ remaining_jiffies = resched_target - now;
+
+ full_tick = remaining_jiffies == 0;

mutex_lock(&sched->lock);
if (panthor_device_reset_is_pending(sched->ptdev))
goto out_unlock;

- tick_ctx_init(sched, &ctx, remaining_jiffies != 0);
+ tick_ctx_init(sched, &ctx);
if (ctx.csg_upd_failed_mask)
goto out_cleanup_ctx;

- if (remaining_jiffies) {
+ if (!full_tick) {
/* Scheduling forced in the middle of a tick. Only RT groups
* can preempt non-RT ones. Currently running RT groups can't be
* preempted.
@@ -2379,9 +2367,29 @@ static void tick_work(struct work_struct *work)
for (prio = PANTHOR_CSG_PRIORITY_COUNT - 1;
prio >= 0 && !tick_ctx_is_full(sched, &ctx);
prio--) {
+ struct panthor_group *old_highest_prio_group =
+ list_first_entry_or_null(&ctx.old_groups[prio],
+ struct panthor_group, run_node);
+
+ /* Pull out the group with the highest prio for rotation. */
+ if (old_highest_prio_group)
+ list_del(&old_highest_prio_group->run_node);
+
+ /* Re-insert old active groups so they get a chance to run with higher prio. */
+ tick_ctx_pick_groups_from_list(sched, &ctx, &ctx.old_groups[prio], true, true);
+
+ /* Fill the remaining slots with runnable groups. */
tick_ctx_pick_groups_from_list(sched, &ctx, &sched->groups.runnable[prio],
true, false);
- tick_ctx_pick_groups_from_list(sched, &ctx, &ctx.old_groups[prio], true, true);
+
+ /* Re-insert the old group with the highest prio, and give it a chance to be
+ * scheduled again (but with a lower prio) if there's room left.
+ */
+ if (old_highest_prio_group) {
+ list_add_tail(&old_highest_prio_group->run_node, &ctx.old_groups[prio]);
+ tick_ctx_pick_groups_from_list(sched, &ctx, &ctx.old_groups[prio],
+ true, true);
+ }
}

/* If we have free CSG slots left, pick idle groups */
@@ -2506,14 +2514,33 @@ static void sync_upd_work(struct work_struct *work)
sched_queue_delayed_work(sched, tick, 0);
}

+static void sched_resume_tick(struct panthor_device *ptdev)
+{
+ struct panthor_scheduler *sched = ptdev->scheduler;
+ u64 delay_jiffies, now;
+
+ drm_WARN_ON(&ptdev->base, sched->resched_target != U64_MAX);
+
+ /* Scheduler tick was off, recalculate the resched_target based on the
+ * last tick event, and queue the scheduler work.
+ */
+ now = get_jiffies_64();
+ sched->resched_target = sched->last_tick + sched->tick_period;
+ if (sched->used_csg_slot_count == sched->csg_slot_count &&
+ time_before64(now, sched->resched_target))
+ delay_jiffies = min_t(unsigned long, sched->resched_target - now, ULONG_MAX);
+ else
+ delay_jiffies = 0;
+
+ sched_queue_delayed_work(sched, tick, delay_jiffies);
+}
+
static void group_schedule_locked(struct panthor_group *group, u32 queue_mask)
{
struct panthor_device *ptdev = group->ptdev;
struct panthor_scheduler *sched = ptdev->scheduler;
struct list_head *queue = &sched->groups.runnable[group->priority];
- u64 delay_jiffies = 0;
bool was_idle;
- u64 now;

if (!group_can_run(group))
return;
@@ -2558,13 +2585,7 @@ static void group_schedule_locked(struct panthor_group *group, u32 queue_mask)
/* Scheduler tick was off, recalculate the resched_target based on the
* last tick event, and queue the scheduler work.
*/
- now = get_jiffies_64();
- sched->resched_target = sched->last_tick + sched->tick_period;
- if (sched->used_csg_slot_count == sched->csg_slot_count &&
- time_before64(now, sched->resched_target))
- delay_jiffies = min_t(unsigned long, sched->resched_target - now, ULONG_MAX);
-
- sched_queue_delayed_work(sched, tick, delay_jiffies);
+ sched_resume_tick(ptdev);
}

static void queue_stop(struct panthor_queue *queue,
@@ -2637,6 +2658,20 @@ void panthor_sched_report_mmu_fault(struct panthor_device *ptdev)
panthor_sched_immediate_tick(ptdev);
}

+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev)
+{
+ /* FW can write out internal state, like the heap context, during CSG
+ * suspend. It is therefore important that the scheduler has fully
+ * evicted any pending and related groups before VM destruction can
+ * safely continue. Failure to do so can lead to GPU page faults.
+ * A controlled termination of a Panthor instance involves destroying
+ * the group(s) before the VM. This means any relevant group eviction
+ * has already been initiated by this point, and we just need to
+ * ensure that any pending tick_work() has been completed.
+ */
+ flush_work(&ptdev->scheduler->tick_work.work);
+}
+
void panthor_sched_resume(struct panthor_device *ptdev)
{
/* Force a tick to re-evaluate after a resume. */
@@ -3121,6 +3156,18 @@ queue_run_job(struct drm_sched_job *sched_job)

group_schedule_locked(group, BIT(job->queue_idx));
} else {
+ u32 queue_mask = BIT(job->queue_idx);
+ bool resume_tick = group_is_idle(group) &&
+ (group->idle_queues & queue_mask) &&
+ !(group->blocked_queues & queue_mask) &&
+ sched->resched_target == U64_MAX;
+
+ /* We just added something to the queue, so it's no longer idle. */
+ group->idle_queues &= ~queue_mask;
+
+ if (resume_tick)
+ sched_resume_tick(ptdev);
+
gpu_write(ptdev, CSF_DOORBELL(queue->doorbell_id), 1);
if (!sched->pm.has_ref &&
!(group->blocked_queues & BIT(job->queue_idx))) {
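The tick_work() rework above changes how same-priority groups rotate: the old group currently holding the highest CSG slot priority is pulled off its list head, the remaining old groups are picked first (so they move up), and the pulled group is re-queued at the tail. A toy model of the net effect on one priority level, with a plain array standing in for the kernel's `list_head` run list (everything here is illustrative, not driver code):

```c
#include <stddef.h>

/* Round-robin one priority level: the current head (highest slot
 * priority) goes to the back so it cannot starve its peers. */
static void rotate_same_prio(int *groups, size_t n)
{
	size_t i;
	int head;

	if (n < 2)
		return;

	head = groups[0];
	for (i = 0; i + 1 < n; i++)
		groups[i] = groups[i + 1];
	groups[n - 1] = head;
}
```

With three runnable groups [1, 2, 3] at the same priority, successive ticks schedule them as [2, 3, 1], then [3, 1, 2], which is the fairness property the removed `min_priority` bookkeeping was approximating.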
diff --git a/drivers/gpu/drm/panthor/panthor_sched.h b/drivers/gpu/drm/panthor/panthor_sched.h
index 3a30d2328b308..666d1655ee18e 100644
--- a/drivers/gpu/drm/panthor/panthor_sched.h
+++ b/drivers/gpu/drm/panthor/panthor_sched.h
@@ -45,6 +45,7 @@ void panthor_sched_suspend(struct panthor_device *ptdev);
void panthor_sched_resume(struct panthor_device *ptdev);

void panthor_sched_report_mmu_fault(struct panthor_device *ptdev);
+void panthor_sched_prepare_for_vm_destruction(struct panthor_device *ptdev);
void panthor_sched_report_fw_events(struct panthor_device *ptdev, u32 events);

#endif
diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
index 9deb91970d4df..f12227145ef08 100644
--- a/drivers/gpu/drm/radeon/si_dpm.c
+++ b/drivers/gpu/drm/radeon/si_dpm.c
@@ -2925,6 +2925,11 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
max_sclk = 60000;
max_mclk = 80000;
}
+ if ((rdev->pdev->device == 0x666f) &&
+ (rdev->pdev->revision == 0x00)) {
+ max_sclk = 80000;
+ max_mclk = 95000;
+ }
} else if (rdev->family == CHIP_OLAND) {
if ((rdev->pdev->revision == 0xC7) ||
(rdev->pdev->revision == 0x80) ||
diff --git a/drivers/gpu/drm/tests/drm_gem_shmem_test.c b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
index 925fbc2cda700..0c8c46ca158c0 100644
--- a/drivers/gpu/drm/tests/drm_gem_shmem_test.c
+++ b/drivers/gpu/drm/tests/drm_gem_shmem_test.c
@@ -194,7 +194,7 @@ static void drm_gem_shmem_test_vmap(struct kunit *test)
* scatter/gather table large enough to accommodate the backing memory
* is successfully exported.
*/
-static void drm_gem_shmem_test_get_pages_sgt(struct kunit *test)
+static void drm_gem_shmem_test_get_sg_table(struct kunit *test)
{
struct drm_device *drm_dev = test->priv;
struct drm_gem_shmem_object *shmem;
@@ -236,7 +236,7 @@ static void drm_gem_shmem_test_get_pages_sgt(struct kunit *test)
* backing pages are pinned and a scatter/gather table large enough to
* accommodate the backing memory is successfully exported.
*/
-static void drm_gem_shmem_test_get_sg_table(struct kunit *test)
+static void drm_gem_shmem_test_get_pages_sgt(struct kunit *test)
{
struct drm_device *drm_dev = test->priv;
struct drm_gem_shmem_object *shmem;
@@ -366,8 +366,8 @@ static struct kunit_case drm_gem_shmem_test_cases[] = {
KUNIT_CASE(drm_gem_shmem_test_obj_create_private),
KUNIT_CASE(drm_gem_shmem_test_pin_pages),
KUNIT_CASE(drm_gem_shmem_test_vmap),
- KUNIT_CASE(drm_gem_shmem_test_get_pages_sgt),
KUNIT_CASE(drm_gem_shmem_test_get_sg_table),
+ KUNIT_CASE(drm_gem_shmem_test_get_pages_sgt),
KUNIT_CASE(drm_gem_shmem_test_madvise),
KUNIT_CASE(drm_gem_shmem_test_purge),
{}
diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
index 7c17108da7d2d..f45fdd7d542f6 100644
--- a/drivers/gpu/drm/v3d/v3d_drv.c
+++ b/drivers/gpu/drm/v3d/v3d_drv.c
@@ -302,6 +302,8 @@ static int v3d_platform_drm_probe(struct platform_device *pdev)
if (ret)
goto clk_disable;

+ dma_set_max_seg_size(&pdev->dev, UINT_MAX);
+
v3d->va_width = 30 + V3D_GET_FIELD(mmu_debug, V3D_MMU_VA_WIDTH);

ident1 = V3D_READ(V3D_HUB_IDENT1);
diff --git a/drivers/gpu/drm/xe/xe_assert.h b/drivers/gpu/drm/xe/xe_assert.h
index e22bbf57fca75..04d6b95c6d878 100644
--- a/drivers/gpu/drm/xe/xe_assert.h
+++ b/drivers/gpu/drm/xe/xe_assert.h
@@ -10,7 +10,7 @@

#include <drm/drm_print.h>

-#include "xe_device_types.h"
+#include "xe_gt_types.h"
#include "xe_step.h"

/**
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index 161c73e676640..3ce5bf902f700 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -783,6 +783,7 @@ int xe_device_probe(struct xe_device *xe)
static void xe_device_remove_display(struct xe_device *xe)
{
xe_display_unregister(xe);
+ drm_dev_unregister(&xe->drm);

drm_dev_unplug(&xe->drm);
xe_display_driver_remove(xe);
diff --git a/drivers/gpu/drm/xe/xe_device.h b/drivers/gpu/drm/xe/xe_device.h
index 34620ef855c0c..5dad24fc487ca 100644
--- a/drivers/gpu/drm/xe/xe_device.h
+++ b/drivers/gpu/drm/xe/xe_device.h
@@ -9,6 +9,7 @@
#include <drm/drm_util.h>

#include "xe_device_types.h"
+#include "xe_gt_types.h"

static inline struct xe_device *to_xe_device(const struct drm_device *dev)
{
@@ -138,7 +139,7 @@ static inline bool xe_device_uc_enabled(struct xe_device *xe)

static inline struct xe_force_wake *gt_to_fw(struct xe_gt *gt)
{
- return &gt->mmio.fw;
+ return &gt->pm.fw;
}

void xe_device_assert_mem_access(struct xe_device *xe);
diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
index 687f3a9039bb1..78a1f8b78e269 100644
--- a/drivers/gpu/drm/xe/xe_device_types.h
+++ b/drivers/gpu/drm/xe/xe_device_types.h
@@ -14,7 +14,6 @@

#include "xe_devcoredump_types.h"
#include "xe_heci_gsc.h"
-#include "xe_gt_types.h"
#include "xe_lmtt_types.h"
#include "xe_memirq_types.h"
#include "xe_oa.h"
@@ -107,6 +106,45 @@ struct xe_mem_region {
void __iomem *mapping;
};

+/**
+ * struct xe_mmio - register mmio structure
+ *
+ * Represents an MMIO region that the CPU may use to access registers. A
+ * region may share its IO map with other regions (e.g., all GTs within a
+ * tile share the same map with their parent tile, but represent different
+ * subregions of the overall IO space).
+ */
+struct xe_mmio {
+ /** @tile: Backpointer to tile, used for tracing */
+ struct xe_tile *tile;
+
+ /** @regs: Map used to access registers. */
+ void __iomem *regs;
+
+ /**
+ * @sriov_vf_gt: Backpointer to GT.
+ *
+ * This pointer is only set for GT MMIO regions and only when running
+ * as an SRIOV VF structure
+ */
+ struct xe_gt *sriov_vf_gt;
+
+ /**
+ * @regs_size: Length of the register region within the map.
+ *
+ * The size of the iomap set in *regs is generally larger than the
+ * register mmio space since it includes unused regions and/or
+ * non-register regions such as the GGTT PTEs.
+ */
+ size_t regs_size;
+
+ /** @adj_limit: adjust MMIO address if address is below this value */
+ u32 adj_limit;
+
+ /** @adj_offset: offset to add to MMIO address when adjusting */
+ u32 adj_offset;
+};
+
/**
* struct xe_tile - hardware tile structure
*
@@ -148,26 +186,14 @@ struct xe_tile {
* * 4MB-8MB: reserved
* * 8MB-16MB: global GTT
*/
- struct {
- /** @mmio.size: size of tile's MMIO space */
- size_t size;
-
- /** @mmio.regs: pointer to tile's MMIO space (starting with registers) */
- void __iomem *regs;
- } mmio;
+ struct xe_mmio mmio;

/**
* @mmio_ext: MMIO-extension info for a tile.
*
* Each tile has its own additional 256MB (28-bit) MMIO-extension space.
*/
- struct {
- /** @mmio_ext.size: size of tile's additional MMIO-extension space */
- size_t size;
-
- /** @mmio_ext.regs: pointer to tile's additional MMIO-extension space */
- void __iomem *regs;
- } mmio_ext;
+ struct xe_mmio mmio_ext;

/** @mem: memory management info for tile */
struct {
diff --git a/drivers/gpu/drm/xe/xe_gt_freq.c b/drivers/gpu/drm/xe/xe_gt_freq.c
index a05fde2c7b122..62a8aa2fdbb0d 100644
--- a/drivers/gpu/drm/xe/xe_gt_freq.c
+++ b/drivers/gpu/drm/xe/xe_gt_freq.c
@@ -11,9 +11,9 @@
#include <drm/drm_managed.h>
#include <drm/drm_print.h>

-#include "xe_device_types.h"
#include "xe_gt_sysfs.h"
#include "xe_gt_throttle.h"
+#include "xe_gt_types.h"
#include "xe_guc_pc.h"
#include "xe_pm.h"

diff --git a/drivers/gpu/drm/xe/xe_gt_printk.h b/drivers/gpu/drm/xe/xe_gt_printk.h
index d6228baaff1ef..5dc71394372d6 100644
--- a/drivers/gpu/drm/xe/xe_gt_printk.h
+++ b/drivers/gpu/drm/xe/xe_gt_printk.h
@@ -8,7 +8,7 @@

#include <drm/drm_print.h>

-#include "xe_device_types.h"
+#include "xe_gt_types.h"

#define xe_gt_printk(_gt, _level, _fmt, ...) \
drm_##_level(&gt_to_xe(_gt)->drm, "GT%u: " _fmt, (_gt)->info.id, ##__VA_ARGS__)
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index 3d1c51de02687..a287b98ee70b4 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -6,6 +6,7 @@
#ifndef _XE_GT_TYPES_H_
#define _XE_GT_TYPES_H_

+#include "xe_device_types.h"
#include "xe_force_wake_types.h"
#include "xe_gt_idle_types.h"
#include "xe_gt_sriov_pf_types.h"
@@ -145,19 +146,20 @@ struct xe_gt {
/**
* @mmio: mmio info for GT. All GTs within a tile share the same
* register space, but have their own copy of GSI registers at a
- * specific offset, as well as their own forcewake handling.
+ * specific offset.
+ */
+ struct xe_mmio mmio;
+
+ /**
+ * @pm: power management info for GT. The driver uses the GT's
+ * "force wake" interface to wake up specific parts of the GT hardware
+ * from C6 sleep states and ensure the hardware remains awake while it
+ * is being actively used.
*/
struct {
- /** @mmio.fw: force wake for GT */
+ /** @pm.fw: force wake for GT */
struct xe_force_wake fw;
- /**
- * @mmio.adj_limit: adjust MMIO address if address is below this
- * value
- */
- u32 adj_limit;
- /** @mmio.adj_offset: offect to add to MMIO address when adjusting */
- u32 adj_offset;
- } mmio;
+ } pm;

/** @sriov: virtualization data related to GT */
union {
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index f316be1e96452..d0ef3a6a68d2c 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -1112,7 +1112,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
if (exec_queue_reset(q))
err = -EIO;

- if (!exec_queue_destroyed(q)) {
+ if (!exec_queue_destroyed(q) && xe_uc_fw_is_running(&guc->fw)) {
/*
* Wait for any pending G2H to flush out before
* modifying state
@@ -1143,6 +1143,7 @@ guc_exec_queue_timedout_job(struct drm_sched_job *drm_job)
*/
smp_rmb();
ret = wait_event_timeout(guc->ct.wq,
+ !xe_uc_fw_is_running(&guc->fw) ||
!exec_queue_pending_disable(q) ||
guc_read_stopped(guc), HZ * 5);
if (!ret || guc_read_stopped(guc)) {
diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
index 3fd462fda6255..449e6c5636712 100644
--- a/drivers/gpu/drm/xe/xe_mmio.c
+++ b/drivers/gpu/drm/xe/xe_mmio.c
@@ -36,13 +36,19 @@ static void tiles_fini(void *arg)
/*
* On multi-tile devices, partition the BAR space for MMIO on each tile,
* possibly accounting for register override on the number of tiles available.
+ * tile_mmio_size contains both the tile's 4MB register space and
+ * additional space for the GTT and other (possibly unused) regions.
* Resulting memory layout is like below:
*
* .----------------------. <- tile_count * tile_mmio_size
* | .... |
* |----------------------| <- 2 * tile_mmio_size
+ * | tile1 GTT + other |
+ * |----------------------| <- 1 * tile_mmio_size + 4MB
* | tile1->mmio.regs |
* |----------------------| <- 1 * tile_mmio_size
+ * | tile0 GTT + other |
+ * |----------------------| <- 4MB
* | tile0->mmio.regs |
* '----------------------' <- 0MB
*/
@@ -61,16 +67,16 @@ static void mmio_multi_tile_setup(struct xe_device *xe, size_t tile_mmio_size)

/* Possibly override number of tile based on configuration register */
if (!xe->info.skip_mtcfg) {
- struct xe_gt *gt = xe_root_mmio_gt(xe);
+ struct xe_mmio *mmio = xe_root_tile_mmio(xe);
u8 tile_count;
u32 mtcfg;

/*
* Although the per-tile mmio regs are not yet initialized, this
- * is fine as it's going to the root gt, that's guaranteed to be
- * initialized earlier in xe_mmio_init()
+ * is fine as it's going to the root tile's mmio, that's
+ * guaranteed to be initialized earlier in xe_mmio_init()
*/
- mtcfg = xe_mmio_read64_2x32(gt, XEHP_MTCFG_ADDR);
+ mtcfg = xe_mmio_read64_2x32(mmio, XEHP_MTCFG_ADDR);
tile_count = REG_FIELD_GET(TILE_COUNT, mtcfg) + 1;

if (tile_count < xe->info.tile_count) {
@@ -90,8 +96,9 @@ static void mmio_multi_tile_setup(struct xe_device *xe, size_t tile_mmio_size)

regs = xe->mmio.regs;
for_each_tile(tile, xe, id) {
- tile->mmio.size = tile_mmio_size;
+ tile->mmio.regs_size = SZ_4M;
tile->mmio.regs = regs;
+ tile->mmio.tile = tile;
regs += tile_mmio_size;
}
}
@@ -126,8 +133,9 @@ static void mmio_extension_setup(struct xe_device *xe, size_t tile_mmio_size,

regs = xe->mmio.regs + tile_mmio_size * xe->info.tile_count;
for_each_tile(tile, xe, id) {
- tile->mmio_ext.size = tile_mmio_ext_size;
+ tile->mmio_ext.regs_size = tile_mmio_ext_size;
tile->mmio_ext.regs = regs;
+ tile->mmio_ext.tile = tile;
regs += tile_mmio_ext_size;
}
}
@@ -172,122 +180,118 @@ int xe_mmio_init(struct xe_device *xe)
}

/* Setup first tile; other tiles (if present) will be setup later. */
- root_tile->mmio.size = SZ_16M;
+ root_tile->mmio.regs_size = SZ_4M;
root_tile->mmio.regs = xe->mmio.regs;
+ root_tile->mmio.tile = root_tile;

return devm_add_action_or_reset(xe->drm.dev, mmio_fini, xe);
}

-static void mmio_flush_pending_writes(struct xe_gt *gt)
+static void mmio_flush_pending_writes(struct xe_mmio *mmio)
{
#define DUMMY_REG_OFFSET 0x130030
- struct xe_tile *tile = gt_to_tile(gt);
int i;

- if (tile->xe->info.platform != XE_LUNARLAKE)
+ if (mmio->tile->xe->info.platform != XE_LUNARLAKE)
return;

/* 4 dummy writes */
for (i = 0; i < 4; i++)
- writel(0, tile->mmio.regs + DUMMY_REG_OFFSET);
+ writel(0, mmio->regs + DUMMY_REG_OFFSET);
}

-u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg)
+u8 __xe_mmio_read8(struct xe_mmio *mmio, struct xe_reg reg)
{
- struct xe_tile *tile = gt_to_tile(gt);
- u32 addr = xe_mmio_adjusted_addr(gt, reg.addr);
+ u32 addr = xe_mmio_adjusted_addr(mmio, reg.addr);
u8 val;

/* Wa_15015404425 */
- mmio_flush_pending_writes(gt);
+ mmio_flush_pending_writes(mmio);

- val = readb((reg.ext ? tile->mmio_ext.regs : tile->mmio.regs) + addr);
- trace_xe_reg_rw(gt, false, addr, val, sizeof(val));
+ val = readb(mmio->regs + addr);
+ trace_xe_reg_rw(mmio, false, addr, val, sizeof(val));

return val;
}

-u16 xe_mmio_read16(struct xe_gt *gt, struct xe_reg reg)
+u16 __xe_mmio_read16(struct xe_mmio *mmio, struct xe_reg reg)
{
- struct xe_tile *tile = gt_to_tile(gt);
- u32 addr = xe_mmio_adjusted_addr(gt, reg.addr);
+ u32 addr = xe_mmio_adjusted_addr(mmio, reg.addr);
u16 val;

/* Wa_15015404425 */
- mmio_flush_pending_writes(gt);
+ mmio_flush_pending_writes(mmio);

- val = readw((reg.ext ? tile->mmio_ext.regs : tile->mmio.regs) + addr);
- trace_xe_reg_rw(gt, false, addr, val, sizeof(val));
+ val = readw(mmio->regs + addr);
+ trace_xe_reg_rw(mmio, false, addr, val, sizeof(val));

return val;
}

-void xe_mmio_write32(struct xe_gt *gt, struct xe_reg reg, u32 val)
+void __xe_mmio_write32(struct xe_mmio *mmio, struct xe_reg reg, u32 val)
{
- struct xe_tile *tile = gt_to_tile(gt);
- u32 addr = xe_mmio_adjusted_addr(gt, reg.addr);
+ u32 addr = xe_mmio_adjusted_addr(mmio, reg.addr);

- trace_xe_reg_rw(gt, true, addr, val, sizeof(val));
+ trace_xe_reg_rw(mmio, true, addr, val, sizeof(val));

- if (!reg.vf && IS_SRIOV_VF(gt_to_xe(gt)))
- xe_gt_sriov_vf_write32(gt, reg, val);
+ if (!reg.vf && mmio->sriov_vf_gt)
+ xe_gt_sriov_vf_write32(mmio->sriov_vf_gt, reg, val);
else
- writel(val, (reg.ext ? tile->mmio_ext.regs : tile->mmio.regs) + addr);
+ writel(val, mmio->regs + addr);
}

-u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg)
+u32 __xe_mmio_read32(struct xe_mmio *mmio, struct xe_reg reg)
{
- struct xe_tile *tile = gt_to_tile(gt);
- u32 addr = xe_mmio_adjusted_addr(gt, reg.addr);
+ u32 addr = xe_mmio_adjusted_addr(mmio, reg.addr);
u32 val;

/* Wa_15015404425 */
- mmio_flush_pending_writes(gt);
+ mmio_flush_pending_writes(mmio);

- if (!reg.vf && IS_SRIOV_VF(gt_to_xe(gt)))
- val = xe_gt_sriov_vf_read32(gt, reg);
+ if (!reg.vf && mmio->sriov_vf_gt)
+ val = xe_gt_sriov_vf_read32(mmio->sriov_vf_gt, reg);
else
- val = readl((reg.ext ? tile->mmio_ext.regs : tile->mmio.regs) + addr);
+ val = readl(mmio->regs + addr);

- trace_xe_reg_rw(gt, false, addr, val, sizeof(val));
+ trace_xe_reg_rw(mmio, false, addr, val, sizeof(val));

return val;
}

-u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr, u32 set)
+u32 __xe_mmio_rmw32(struct xe_mmio *mmio, struct xe_reg reg, u32 clr, u32 set)
{
u32 old, reg_val;

- old = xe_mmio_read32(gt, reg);
+ old = xe_mmio_read32(mmio, reg);
reg_val = (old & ~clr) | set;
- xe_mmio_write32(gt, reg, reg_val);
+ xe_mmio_write32(mmio, reg, reg_val);

return old;
}

-int xe_mmio_write32_and_verify(struct xe_gt *gt,
- struct xe_reg reg, u32 val, u32 mask, u32 eval)
+int __xe_mmio_write32_and_verify(struct xe_mmio *mmio,
+ struct xe_reg reg, u32 val, u32 mask, u32 eval)
{
u32 reg_val;

- xe_mmio_write32(gt, reg, val);
- reg_val = xe_mmio_read32(gt, reg);
+ xe_mmio_write32(mmio, reg, val);
+ reg_val = xe_mmio_read32(mmio, reg);

return (reg_val & mask) != eval ? -EINVAL : 0;
}

-bool xe_mmio_in_range(const struct xe_gt *gt,
- const struct xe_mmio_range *range,
- struct xe_reg reg)
+bool __xe_mmio_in_range(const struct xe_mmio *mmio,
+ const struct xe_mmio_range *range,
+ struct xe_reg reg)
{
- u32 addr = xe_mmio_adjusted_addr(gt, reg.addr);
+ u32 addr = xe_mmio_adjusted_addr(mmio, reg.addr);

return range && addr >= range->start && addr <= range->end;
}

/**
* xe_mmio_read64_2x32() - Read a 64-bit register as two 32-bit reads
- * @gt: MMIO target GT
+ * @mmio: MMIO target
* @reg: register to read value from
*
* Although Intel GPUs have some 64-bit registers, the hardware officially
@@ -307,21 +311,21 @@ bool xe_mmio_in_range(const struct xe_gt *gt,
*
* Returns the value of the 64-bit register.
*/
-u64 xe_mmio_read64_2x32(struct xe_gt *gt, struct xe_reg reg)
+u64 __xe_mmio_read64_2x32(struct xe_mmio *mmio, struct xe_reg reg)
{
struct xe_reg reg_udw = { .addr = reg.addr + 0x4 };
u32 ldw, udw, oldudw, retries;

- reg.addr = xe_mmio_adjusted_addr(gt, reg.addr);
- reg_udw.addr = xe_mmio_adjusted_addr(gt, reg_udw.addr);
-
- /* we shouldn't adjust just one register address */
- xe_gt_assert(gt, reg_udw.addr == reg.addr + 0x4);
+ /*
+ * The two dwords of a 64-bit register can never straddle the offset
+ * adjustment cutoff.
+ */
+ xe_tile_assert(mmio->tile, !in_range(mmio->adj_limit, reg.addr + 1, 7));

- oldudw = xe_mmio_read32(gt, reg_udw);
+ oldudw = xe_mmio_read32(mmio, reg_udw);
for (retries = 5; retries; --retries) {
- ldw = xe_mmio_read32(gt, reg);
- udw = xe_mmio_read32(gt, reg_udw);
+ ldw = xe_mmio_read32(mmio, reg);
+ udw = xe_mmio_read32(mmio, reg_udw);

if (udw == oldudw)
break;
@@ -329,14 +333,14 @@ u64 xe_mmio_read64_2x32(struct xe_gt *gt, struct xe_reg reg)
oldudw = udw;
}

- xe_gt_WARN(gt, retries == 0,
- "64-bit read of %#x did not stabilize\n", reg.addr);
+ drm_WARN(&mmio->tile->xe->drm, retries == 0,
+ "64-bit read of %#x did not stabilize\n", reg.addr);

return (u64)udw << 32 | ldw;
}

-static int __xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
- u32 *out_val, bool atomic, bool expect_match)
+static int ____xe_mmio_wait32(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
+ u32 *out_val, bool atomic, bool expect_match)
{
ktime_t cur = ktime_get_raw();
const ktime_t end = ktime_add_us(cur, timeout_us);
@@ -346,7 +350,7 @@ static int __xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 v
bool check;

for (;;) {
- read = xe_mmio_read32(gt, reg);
+ read = xe_mmio_read32(mmio, reg);

check = (read & mask) == val;
if (!expect_match)
@@ -372,7 +376,7 @@ static int __xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 v
}

if (ret != 0) {
- read = xe_mmio_read32(gt, reg);
+ read = xe_mmio_read32(mmio, reg);

check = (read & mask) == val;
if (!expect_match)
@@ -390,7 +394,7 @@ static int __xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 v

/**
* xe_mmio_wait32() - Wait for a register to match the desired masked value
- * @gt: MMIO target GT
+ * @mmio: MMIO target
* @reg: register to read value from
* @mask: mask to be applied to the value read from the register
* @val: desired value after applying the mask
@@ -407,15 +411,15 @@ static int __xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 v
* @timeout_us for different reasons, specially in non-atomic contexts. Thus,
* it is possible that this function succeeds even after @timeout_us has passed.
*/
-int xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
- u32 *out_val, bool atomic)
+int __xe_mmio_wait32(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
+ u32 *out_val, bool atomic)
{
- return __xe_mmio_wait32(gt, reg, mask, val, timeout_us, out_val, atomic, true);
+ return ____xe_mmio_wait32(mmio, reg, mask, val, timeout_us, out_val, atomic, true);
}

/**
* xe_mmio_wait32_not() - Wait for a register to return anything other than the given masked value
- * @gt: MMIO target GT
+ * @mmio: MMIO target
* @reg: register to read value from
* @mask: mask to be applied to the value read from the register
* @val: value not to be matched after applying the mask
@@ -426,8 +430,8 @@ int xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 t
* This function works exactly like xe_mmio_wait32() with the exception that
* @val is expected not to be matched.
*/
-int xe_mmio_wait32_not(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
- u32 *out_val, bool atomic)
+int __xe_mmio_wait32_not(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
+ u32 *out_val, bool atomic)
{
- return __xe_mmio_wait32(gt, reg, mask, val, timeout_us, out_val, atomic, false);
+ return ____xe_mmio_wait32(mmio, reg, mask, val, timeout_us, out_val, atomic, false);
}
diff --git a/drivers/gpu/drm/xe/xe_mmio.h b/drivers/gpu/drm/xe/xe_mmio.h
index 26551410ecc87..ac6846447c52a 100644
--- a/drivers/gpu/drm/xe/xe_mmio.h
+++ b/drivers/gpu/drm/xe/xe_mmio.h
@@ -14,25 +14,67 @@ struct xe_reg;
int xe_mmio_init(struct xe_device *xe);
int xe_mmio_probe_tiles(struct xe_device *xe);

-u8 xe_mmio_read8(struct xe_gt *gt, struct xe_reg reg);
-u16 xe_mmio_read16(struct xe_gt *gt, struct xe_reg reg);
-void xe_mmio_write32(struct xe_gt *gt, struct xe_reg reg, u32 val);
-u32 xe_mmio_read32(struct xe_gt *gt, struct xe_reg reg);
-u32 xe_mmio_rmw32(struct xe_gt *gt, struct xe_reg reg, u32 clr, u32 set);
-int xe_mmio_write32_and_verify(struct xe_gt *gt, struct xe_reg reg, u32 val, u32 mask, u32 eval);
-bool xe_mmio_in_range(const struct xe_gt *gt, const struct xe_mmio_range *range, struct xe_reg reg);
-
-u64 xe_mmio_read64_2x32(struct xe_gt *gt, struct xe_reg reg);
-int xe_mmio_wait32(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
- u32 *out_val, bool atomic);
-int xe_mmio_wait32_not(struct xe_gt *gt, struct xe_reg reg, u32 mask, u32 val, u32 timeout_us,
- u32 *out_val, bool atomic);
-
-static inline u32 xe_mmio_adjusted_addr(const struct xe_gt *gt, u32 addr)
+/*
+ * Temporary transition helper for xe_gt -> xe_mmio conversion. Allows
+ * continued usage of xe_gt as a parameter to MMIO operations which now
+ * take an xe_mmio structure instead. Will be removed once the driver-wide
+ * conversion is complete.
+ */
+#define __to_xe_mmio(ptr) \
+ _Generic(ptr, \
+ const struct xe_gt *: (&((const struct xe_gt *)(ptr))->mmio), \
+ struct xe_gt *: (&((struct xe_gt *)(ptr))->mmio), \
+ const struct xe_mmio *: (ptr), \
+ struct xe_mmio *: (ptr))
+
+u8 __xe_mmio_read8(struct xe_mmio *mmio, struct xe_reg reg);
+#define xe_mmio_read8(p, reg) __xe_mmio_read8(__to_xe_mmio(p), reg)
+
+u16 __xe_mmio_read16(struct xe_mmio *mmio, struct xe_reg reg);
+#define xe_mmio_read16(p, reg) __xe_mmio_read16(__to_xe_mmio(p), reg)
+
+void __xe_mmio_write32(struct xe_mmio *mmio, struct xe_reg reg, u32 val);
+#define xe_mmio_write32(p, reg, val) __xe_mmio_write32(__to_xe_mmio(p), reg, val)
+
+u32 __xe_mmio_read32(struct xe_mmio *mmio, struct xe_reg reg);
+#define xe_mmio_read32(p, reg) __xe_mmio_read32(__to_xe_mmio(p), reg)
+
+u32 __xe_mmio_rmw32(struct xe_mmio *mmio, struct xe_reg reg, u32 clr, u32 set);
+#define xe_mmio_rmw32(p, reg, clr, set) __xe_mmio_rmw32(__to_xe_mmio(p), reg, clr, set)
+
+int __xe_mmio_write32_and_verify(struct xe_mmio *mmio, struct xe_reg reg,
+ u32 val, u32 mask, u32 eval);
+#define xe_mmio_write32_and_verify(p, reg, val, mask, eval) \
+ __xe_mmio_write32_and_verify(__to_xe_mmio(p), reg, val, mask, eval)
+
+bool __xe_mmio_in_range(const struct xe_mmio *mmio,
+ const struct xe_mmio_range *range, struct xe_reg reg);
+#define xe_mmio_in_range(p, range, reg) __xe_mmio_in_range(__to_xe_mmio(p), range, reg)
+
+u64 __xe_mmio_read64_2x32(struct xe_mmio *mmio, struct xe_reg reg);
+#define xe_mmio_read64_2x32(p, reg) __xe_mmio_read64_2x32(__to_xe_mmio(p), reg)
+
+int __xe_mmio_wait32(struct xe_mmio *mmio, struct xe_reg reg, u32 mask, u32 val,
+ u32 timeout_us, u32 *out_val, bool atomic);
+#define xe_mmio_wait32(p, reg, mask, val, timeout_us, out_val, atomic) \
+ __xe_mmio_wait32(__to_xe_mmio(p), reg, mask, val, timeout_us, out_val, atomic)
+
+int __xe_mmio_wait32_not(struct xe_mmio *mmio, struct xe_reg reg, u32 mask,
+ u32 val, u32 timeout_us, u32 *out_val, bool atomic);
+#define xe_mmio_wait32_not(p, reg, mask, val, timeout_us, out_val, atomic) \
+ __xe_mmio_wait32_not(__to_xe_mmio(p), reg, mask, val, timeout_us, out_val, atomic)
+
+static inline u32 __xe_mmio_adjusted_addr(const struct xe_mmio *mmio, u32 addr)
{
- if (addr < gt->mmio.adj_limit)
- addr += gt->mmio.adj_offset;
+ if (addr < mmio->adj_limit)
+ addr += mmio->adj_offset;
return addr;
}
+#define xe_mmio_adjusted_addr(p, addr) __xe_mmio_adjusted_addr(__to_xe_mmio(p), addr)
+
+static inline struct xe_mmio *xe_root_tile_mmio(struct xe_device *xe)
+{
+ return &xe->tiles[0].mmio;
+}

#endif
diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
index da09c26249f5f..9d807d9a0807a 100644
--- a/drivers/gpu/drm/xe/xe_pci.c
+++ b/drivers/gpu/drm/xe/xe_pci.c
@@ -708,6 +708,12 @@ static int xe_info_init(struct xe_device *xe,
gt->info.type = XE_GT_TYPE_MAIN;
gt->info.has_indirect_ring_state = graphics_desc->has_indirect_ring_state;
gt->info.engine_mask = graphics_desc->hw_engine_mask;
+ gt->mmio.regs = tile->mmio.regs;
+ gt->mmio.regs_size = tile->mmio.regs_size;
+ gt->mmio.tile = tile;
+ if (IS_SRIOV_VF(xe))
+ gt->mmio.sriov_vf_gt = gt;
+
if (MEDIA_VER(xe) < 13 && media_desc)
gt->info.engine_mask |= media_desc->hw_engine_mask;

@@ -726,8 +732,13 @@ static int xe_info_init(struct xe_device *xe,
gt->info.type = XE_GT_TYPE_MEDIA;
gt->info.has_indirect_ring_state = media_desc->has_indirect_ring_state;
gt->info.engine_mask = media_desc->hw_engine_mask;
+ gt->mmio.regs = tile->mmio.regs;
+ gt->mmio.regs_size = tile->mmio.regs_size;
gt->mmio.adj_offset = MEDIA_GT_GSI_OFFSET;
gt->mmio.adj_limit = MEDIA_GT_GSI_LENGTH;
+ gt->mmio.tile = tile;
+ if (IS_SRIOV_VF(xe))
+ gt->mmio.sriov_vf_gt = gt;

/*
* FIXME: At the moment multi-tile and standalone media are
diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
index 52969c0909659..d3773a9853872 100644
--- a/drivers/gpu/drm/xe/xe_reg_sr.c
+++ b/drivers/gpu/drm/xe/xe_reg_sr.c
@@ -15,6 +15,7 @@

#include "regs/xe_engine_regs.h"
#include "regs/xe_gt_regs.h"
+#include "xe_device.h"
#include "xe_device_types.h"
#include "xe_force_wake.h"
#include "xe_gt.h"
@@ -175,14 +176,14 @@ void xe_reg_sr_apply_mmio(struct xe_reg_sr *sr, struct xe_gt *gt)

xe_gt_dbg(gt, "Applying %s save-restore MMIOs\n", sr->name);

- err = xe_force_wake_get(&gt->mmio.fw, XE_FORCEWAKE_ALL);
+ err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
if (err)
goto err_force_wake;

xa_for_each(&sr->xa, reg, entry)
apply_one_mmio(gt, entry);

- err = xe_force_wake_put(&gt->mmio.fw, XE_FORCEWAKE_ALL);
+ err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
XE_WARN_ON(err);

return;
@@ -208,7 +209,7 @@ void xe_reg_sr_apply_whitelist(struct xe_hw_engine *hwe)

drm_dbg(&xe->drm, "Whitelisting %s registers\n", sr->name);

- err = xe_force_wake_get(&gt->mmio.fw, XE_FORCEWAKE_ALL);
+ err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL);
if (err)
goto err_force_wake;

@@ -234,7 +235,7 @@ void xe_reg_sr_apply_whitelist(struct xe_hw_engine *hwe)
xe_mmio_write32(gt, RING_FORCE_TO_NONPRIV(mmio_base, slot), addr);
}

- err = xe_force_wake_put(&gt->mmio.fw, XE_FORCEWAKE_ALL);
+ err = xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL);
XE_WARN_ON(err);

return;
diff --git a/drivers/gpu/drm/xe/xe_trace.h b/drivers/gpu/drm/xe/xe_trace.h
index 8573d7a87d840..91130ad8999cd 100644
--- a/drivers/gpu/drm/xe/xe_trace.h
+++ b/drivers/gpu/drm/xe/xe_trace.h
@@ -21,6 +21,7 @@
#include "xe_vm.h"

#define __dev_name_xe(xe) dev_name((xe)->drm.dev)
+#define __dev_name_tile(tile) __dev_name_xe(tile_to_xe((tile)))
#define __dev_name_gt(gt) __dev_name_xe(gt_to_xe((gt)))
#define __dev_name_eq(q) __dev_name_gt((q)->gt)

@@ -342,12 +343,12 @@ DEFINE_EVENT(xe_hw_fence, xe_hw_fence_try_signal,
);

TRACE_EVENT(xe_reg_rw,
- TP_PROTO(struct xe_gt *gt, bool write, u32 reg, u64 val, int len),
+ TP_PROTO(struct xe_mmio *mmio, bool write, u32 reg, u64 val, int len),

- TP_ARGS(gt, write, reg, val, len),
+ TP_ARGS(mmio, write, reg, val, len),

TP_STRUCT__entry(
- __string(dev, __dev_name_gt(gt))
+ __string(dev, __dev_name_tile(mmio->tile))
__field(u64, val)
__field(u32, reg)
__field(u16, write)
diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
index aea6034a81079..a2af1a44b0e08 100644
--- a/drivers/gpu/drm/xe/xe_wa.c
+++ b/drivers/gpu/drm/xe/xe_wa.c
@@ -492,10 +492,6 @@ static const struct xe_rtp_entry_sr engine_was[] = {
XE_RTP_RULES(GRAPHICS_VERSION(2004), FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0_UDW, ENABLE_SMP_LD_RENDER_SURFACE_CONTROL))
},
- { XE_RTP_NAME("16018737384"),
- XE_RTP_RULES(GRAPHICS_VERSION(2004), FUNC(xe_rtp_match_first_render_or_compute)),
- XE_RTP_ACTIONS(SET(ROW_CHICKEN, EARLY_EOT_DIS))
- },
/*
* These two workarounds are the same, just applying to different
* engines. Although Wa_18032095049 (for the RCS) isn't required on
@@ -522,31 +518,28 @@ static const struct xe_rtp_entry_sr engine_was[] = {
/* Xe2_HPG */

{ XE_RTP_NAME("16018712365"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0_UDW, XE2_ALLOC_DPA_STARVE_FIX_DIS))
},
{ XE_RTP_NAME("16018737384"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, XE_RTP_END_VERSION_UNDEFINED),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(ROW_CHICKEN, EARLY_EOT_DIS))
},
- { XE_RTP_NAME("14019988906"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
- XE_RTP_ACTIONS(SET(XEHP_PSS_CHICKEN, FLSH_IGNORES_PSD))
- },
- { XE_RTP_NAME("14019877138"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
- XE_RTP_ACTIONS(SET(XEHP_PSS_CHICKEN, FD_END_COLLECT))
- },
{ XE_RTP_NAME("14020338487"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(ROW_CHICKEN3, XE2_EUPEND_CHK_FLUSH_DIS))
},
{ XE_RTP_NAME("18032247524"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0, SEQUENTIAL_ACCESS_UPGRADE_DISABLE))
},
{ XE_RTP_NAME("14018471104"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0_UDW, ENABLE_SMP_LD_RENDER_SURFACE_CONTROL))
},
/*
@@ -555,7 +548,7 @@ static const struct xe_rtp_entry_sr engine_was[] = {
* apply this to all engines for simplicity.
*/
{ XE_RTP_NAME("16021639441"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002)),
XE_RTP_ACTIONS(SET(CSFE_CHICKEN1(0),
GHWSP_CSB_REPORT_DIS |
PPHWSP_CSB_AND_TIMESTAMP_REPORT_DIS,
@@ -567,11 +560,12 @@ static const struct xe_rtp_entry_sr engine_was[] = {
XE_RTP_ACTIONS(SET(LSC_CHICKEN_BIT_0, WR_REQ_CHAINING_DIS))
},
{ XE_RTP_NAME("14021402888"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
},
- { XE_RTP_NAME("14021821874"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), FUNC(xe_rtp_match_first_render_or_compute)),
+ { XE_RTP_NAME("14021821874, 14022954250"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002),
+ FUNC(xe_rtp_match_first_render_or_compute)),
XE_RTP_ACTIONS(SET(TDL_TSL_CHICKEN, STK_ID_RESTRICT))
},

@@ -730,7 +724,7 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
XE_RTP_ACTIONS(SET(INSTPM(RENDER_RING_BASE), ENABLE_SEMAPHORE_POLL_BIT))
},
{ XE_RTP_NAME("18033852989"),
- XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2004), ENGINE_CLASS(RENDER)),
+ XE_RTP_RULES(GRAPHICS_VERSION(2004), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN1, DISABLE_BOTTOM_CLIP_RECTANGLE_TEST))
},
{ XE_RTP_NAME("14021567978"),
@@ -763,13 +757,21 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_SF_ROUND_NEAREST_EVEN))
},
{ XE_RTP_NAME("14019386621"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(VF_SCRATCHPAD, XE2_VFG_TED_CREDIT_INTERFACE_DISABLE))
},
{ XE_RTP_NAME("14020756599"),
XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(WM_CHICKEN3, HIZ_PLANE_COMPRESSION_DIS))
},
+ { XE_RTP_NAME("14019988906"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
+ XE_RTP_ACTIONS(SET(XEHP_PSS_CHICKEN, FLSH_IGNORES_PSD))
+ },
+ { XE_RTP_NAME("14019877138"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
+ XE_RTP_ACTIONS(SET(XEHP_PSS_CHICKEN, FD_END_COLLECT))
+ },
{ XE_RTP_NAME("14021490052"),
XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(FF_MODE,
@@ -780,13 +782,17 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
DIS_AUTOSTRIP))
},
{ XE_RTP_NAME("15016589081"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
},
{ XE_RTP_NAME("22021007897"),
- XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(2001, 2002), ENGINE_CLASS(RENDER)),
XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN4, SBE_PUSH_CONSTANT_BEHIND_FIX_ENABLE))
},
+ { XE_RTP_NAME("18033852989"),
+ XE_RTP_RULES(GRAPHICS_VERSION(2001), ENGINE_CLASS(RENDER)),
+ XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN1, DISABLE_BOTTOM_CLIP_RECTANGLE_TEST))
+ },

/* Xe3_LPG */
{ XE_RTP_NAME("14021490052"),
diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
index 93fa2708ee378..97d14b2119ecf 100644
--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
+++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
@@ -27,10 +27,11 @@
16022287689 GRAPHICS_VERSION(2001)
GRAPHICS_VERSION(2004)
13011645652 GRAPHICS_VERSION(2004)
-14022293748 GRAPHICS_VERSION(2001)
+ GRAPHICS_VERSION(3001)
+14022293748 GRAPHICS_VERSION_RANGE(2001, 2002)
GRAPHICS_VERSION(2004)
GRAPHICS_VERSION_RANGE(3000, 3001)
-22019794406 GRAPHICS_VERSION(2001)
+22019794406 GRAPHICS_VERSION_RANGE(2001, 2002)
GRAPHICS_VERSION(2004)
GRAPHICS_VERSION_RANGE(3000, 3001)
22019338487 MEDIA_VERSION(2000)
diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
index 95a4ede270991..f283f271d87e7 100644
--- a/drivers/hid/Kconfig
+++ b/drivers/hid/Kconfig
@@ -328,6 +328,7 @@ config HID_ELECOM
- EX-G Trackballs (M-XT3DRBK, M-XT3URBK)
- DEFT Trackballs (M-DT1DRBK, M-DT1URBK, M-DT2DRBK, M-DT2URBK)
- HUGE Trackballs (M-HT1DRBK, M-HT1URBK)
+ - HUGE Plus Trackball (M-HT1MRBK)

config HID_ELO
tristate "ELO USB 4000/4500 touchscreen"
diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
index a0f79803df3bd..b0cf13cd292bd 100644
--- a/drivers/hid/hid-apple.c
+++ b/drivers/hid/hid-apple.c
@@ -353,6 +353,7 @@ static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
};

static const struct apple_non_apple_keyboard non_apple_keyboards[] = {
+ { "SONiX KN85 Keyboard" },
{ "SONiX USB DEVICE" },
{ "SONiX AK870 PRO" },
{ "Keychron" },
diff --git a/drivers/hid/hid-elecom.c b/drivers/hid/hid-elecom.c
index b57b6deb36f42..f1e3fc374a1d5 100644
--- a/drivers/hid/hid-elecom.c
+++ b/drivers/hid/hid-elecom.c
@@ -5,6 +5,7 @@
* - EX-G Trackballs (M-XT3DRBK, M-XT3URBK, M-XT4DRBK)
* - DEFT Trackballs (M-DT1DRBK, M-DT1URBK, M-DT2DRBK, M-DT2URBK)
* - HUGE Trackballs (M-HT1DRBK, M-HT1URBK)
+ * - HUGE Plus Trackball (M-HT1MRBK)
*
* Copyright (c) 2010 Richard Nauber <Richard.Nauber@xxxxxxxxx>
* Copyright (c) 2016 Yuxuan Shui <yshuiv7@xxxxxxxxx>
@@ -111,12 +112,25 @@ static const __u8 *elecom_report_fixup(struct hid_device *hdev, __u8 *rdesc,
*/
mouse_button_fixup(hdev, rdesc, *rsize, 22, 30, 24, 16, 8);
break;
+ case USB_DEVICE_ID_ELECOM_M_HT1MRBK:
+ case USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AB:
+ case USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AC:
+ /*
+ * Report descriptor format:
+ * 24: button bit count
+ * 28: padding bit count
+ * 22: button report size
+ * 16: button usage maximum
+ */
+ mouse_button_fixup(hdev, rdesc, *rsize, 24, 28, 22, 16, 8);
+ break;
}
return rdesc;
}

static const struct hid_device_id elecom_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AC) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) },
@@ -127,6 +141,8 @@ static const struct hid_device_id elecom_devices[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AB) },
{ }
};
MODULE_DEVICE_TABLE(hid, elecom_devices);
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
index 9d0a97a3b06a2..dfa39a37405e3 100644
--- a/drivers/hid/hid-ids.h
+++ b/drivers/hid/hid-ids.h
@@ -433,6 +433,7 @@
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349 0x7349
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7 0x73f7
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001
+#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C000 0xc000
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002 0xc002

#define USB_VENDOR_ID_EDIFIER 0x2d99
@@ -458,6 +459,9 @@
#define USB_DEVICE_ID_ELECOM_M_HT1URBK 0x010c
#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D 0x010d
#define USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C 0x011c
+#define USB_DEVICE_ID_ELECOM_M_HT1MRBK 0x01aa
+#define USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AB 0x01ab
+#define USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AC 0x01ac

#define USB_VENDOR_ID_DREAM_CHEEKY 0x1d34
#define USB_DEVICE_ID_DREAM_CHEEKY_WN 0x0004
diff --git a/drivers/hid/hid-logitech-hidpp.c b/drivers/hid/hid-logitech-hidpp.c
index c470b4f0e9211..492a02ca80594 100644
--- a/drivers/hid/hid-logitech-hidpp.c
+++ b/drivers/hid/hid-logitech-hidpp.c
@@ -4341,7 +4341,7 @@ static int hidpp_get_report_length(struct hid_device *hdev, int id)

re = &(hdev->report_enum[HID_OUTPUT_REPORT]);
report = re->report_id_hash[id];
- if (!report)
+ if (!report || !report->maxfield)
return 0;

return report->field[0]->report_count + 1;
@@ -4693,6 +4693,8 @@ static const struct hid_device_id hidpp_devices[] = {
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb037) },
{ /* MX Anywhere 3SB mouse over Bluetooth */
HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb038) },
+ { /* Slim Solar+ K980 Keyboard over Bluetooth */
+ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LOGITECH, 0xb391) },
{}
};

diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
index 542b3e86d56f4..19c811f9ae074 100644
--- a/drivers/hid/hid-magicmouse.c
+++ b/drivers/hid/hid-magicmouse.c
@@ -715,6 +715,11 @@ static int magicmouse_input_configured(struct hid_device *hdev,
struct magicmouse_sc *msc = hid_get_drvdata(hdev);
int ret;

+ if (!msc->input) {
+ hid_err(hdev, "magicmouse setup input failed (no input)");
+ return -EINVAL;
+ }
+
ret = magicmouse_setup_input(msc->input, hdev);
if (ret) {
hid_err(hdev, "magicmouse setup input failed (%d)\n", ret);
diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
index fcfc508d1b54d..c3a914458358c 100644
--- a/drivers/hid/hid-multitouch.c
+++ b/drivers/hid/hid-multitouch.c
@@ -2026,6 +2026,9 @@ static const struct hid_device_id mt_devices[] = {
{ .driver_data = MT_CLS_EGALAX_SERIAL,
MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001) },
+ { .driver_data = MT_CLS_EGALAX_SERIAL,
+ MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
+ USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C000) },
{ .driver_data = MT_CLS_EGALAX,
MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },
diff --git a/drivers/hid/hid-pl.c b/drivers/hid/hid-pl.c
index 3c8827081deae..dc11d5322fc0f 100644
--- a/drivers/hid/hid-pl.c
+++ b/drivers/hid/hid-pl.c
@@ -194,9 +194,14 @@ static int pl_probe(struct hid_device *hdev, const struct hid_device_id *id)
goto err;
}

- plff_init(hdev);
+ ret = plff_init(hdev);
+ if (ret)
+ goto stop;

return 0;
+
+stop:
+ hid_hw_stop(hdev);
err:
return ret;
}
diff --git a/drivers/hid/hid-playstation.c b/drivers/hid/hid-playstation.c
index 71a8d4ec9913b..b13a8f27cda0c 100644
--- a/drivers/hid/hid-playstation.c
+++ b/drivers/hid/hid-playstation.c
@@ -739,7 +739,9 @@ static struct input_dev *ps_gamepad_create(struct hid_device *hdev,
#if IS_ENABLED(CONFIG_PLAYSTATION_FF)
if (play_effect) {
input_set_capability(gamepad, EV_FF, FF_RUMBLE);
- input_ff_create_memless(gamepad, NULL, play_effect);
+ ret = input_ff_create_memless(gamepad, NULL, play_effect);
+ if (ret)
+ return ERR_PTR(ret);
}
#endif

diff --git a/drivers/hid/hid-prodikeys.c b/drivers/hid/hid-prodikeys.c
index 3d08c190a935b..af7366b37c07b 100644
--- a/drivers/hid/hid-prodikeys.c
+++ b/drivers/hid/hid-prodikeys.c
@@ -378,6 +378,10 @@ static int pcmidi_handle_report4(struct pcmidi_snd *pm, u8 *data)
bit_mask = (bit_mask << 8) | data[2];
bit_mask = (bit_mask << 8) | data[3];

+ /* robustness in case input_mapping hook does not get called */
+ if (!pm->input_ep82)
+ return 0;
+
/* break keys */
for (bit_index = 0; bit_index < 24; bit_index++) {
if (!((0x01 << bit_index) & bit_mask)) {
diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
index 1f531626192cd..7a3e0675d9ba2 100644
--- a/drivers/hid/hid-quirks.c
+++ b/drivers/hid/hid-quirks.c
@@ -415,6 +415,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
#if IS_ENABLED(CONFIG_HID_ELECOM)
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_BM084) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XGL20DLBK) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AC) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_00FB) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3URBK_018F) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_XT3DRBK) },
@@ -424,6 +425,8 @@ static const struct hid_device_id hid_have_special_driver[] = {
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1URBK) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_010D) },
{ HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1DRBK_011C) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ELECOM, USB_DEVICE_ID_ELECOM_M_HT1MRBK_01AB) },
#endif
#if IS_ENABLED(CONFIG_HID_ELO)
{ HID_USB_DEVICE(USB_VENDOR_ID_ELO, 0x0009) },
diff --git a/drivers/hid/i2c-hid/i2c-hid-of-elan.c b/drivers/hid/i2c-hid/i2c-hid-of-elan.c
index 3fcff6daa0d3a..f215bad8b3ab1 100644
--- a/drivers/hid/i2c-hid/i2c-hid-of-elan.c
+++ b/drivers/hid/i2c-hid/i2c-hid-of-elan.c
@@ -159,6 +159,13 @@ static const struct elan_i2c_hid_chip_data elan_ekth6a12nay_chip_data = {
.main_supply_name = "vcc33",
};

+static const struct elan_i2c_hid_chip_data focaltech_ft8112_chip_data = {
+ .post_power_delay_ms = 10,
+ .post_gpio_reset_on_delay_ms = 150,
+ .hid_descriptor_address = 0x0001,
+ .main_supply_name = "vcc33",
+};
+
static const struct elan_i2c_hid_chip_data ilitek_ili9882t_chip_data = {
.post_power_delay_ms = 1,
.post_gpio_reset_on_delay_ms = 200,
@@ -182,6 +189,7 @@ static const struct elan_i2c_hid_chip_data ilitek_ili2901_chip_data = {
static const struct of_device_id elan_i2c_hid_of_match[] = {
{ .compatible = "elan,ekth6915", .data = &elan_ekth6915_chip_data },
{ .compatible = "elan,ekth6a12nay", .data = &elan_ekth6a12nay_chip_data },
+ { .compatible = "focaltech,ft8112", .data = &focaltech_ft8112_chip_data },
{ .compatible = "ilitek,ili9882t", .data = &ilitek_ili9882t_chip_data },
{ .compatible = "ilitek,ili2901", .data = &ilitek_ili2901_chip_data },
{ }
diff --git a/drivers/hid/intel-ish-hid/ishtp/bus.c b/drivers/hid/intel-ish-hid/ishtp/bus.c
index fddc1c4b6cedb..03c68fe40925b 100644
--- a/drivers/hid/intel-ish-hid/ishtp/bus.c
+++ b/drivers/hid/intel-ish-hid/ishtp/bus.c
@@ -730,7 +730,7 @@ void ishtp_bus_remove_all_clients(struct ishtp_device *ishtp_dev,
spin_lock_irqsave(&ishtp_dev->cl_list_lock, flags);
list_for_each_entry(cl, &ishtp_dev->cl_list, link) {
cl->state = ISHTP_CL_DISCONNECTED;
- if (warm_reset && cl->device->reference_count)
+ if (warm_reset && cl->device && cl->device->reference_count)
continue;

/*
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 616e63fb2f151..128d732665bf4 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -25,6 +25,7 @@
#include <linux/cpu.h>
#include <linux/sched/isolation.h>
#include <linux/sched/task_stack.h>
+#include <linux/smpboot.h>

#include <linux/delay.h>
#include <linux/panic_notifier.h>
@@ -1276,7 +1277,7 @@ static void vmbus_chan_sched(struct hv_per_cpu_context *hv_cpu)
}
}

-static void vmbus_isr(void)
+static void __vmbus_isr(void)
{
struct hv_per_cpu_context *hv_cpu
= this_cpu_ptr(hv_context.cpu_context);
@@ -1300,6 +1301,53 @@ static void vmbus_isr(void)
add_interrupt_randomness(vmbus_interrupt);
}

+static DEFINE_PER_CPU(bool, vmbus_irq_pending);
+static DEFINE_PER_CPU(struct task_struct *, vmbus_irqd);
+
+static void vmbus_irqd_wake(void)
+{
+ struct task_struct *tsk = __this_cpu_read(vmbus_irqd);
+
+ __this_cpu_write(vmbus_irq_pending, true);
+ wake_up_process(tsk);
+}
+
+static void vmbus_irqd_setup(unsigned int cpu)
+{
+ sched_set_fifo(current);
+}
+
+static int vmbus_irqd_should_run(unsigned int cpu)
+{
+ return __this_cpu_read(vmbus_irq_pending);
+}
+
+static void run_vmbus_irqd(unsigned int cpu)
+{
+ __this_cpu_write(vmbus_irq_pending, false);
+ __vmbus_isr();
+}
+
+static bool vmbus_irq_initialized;
+
+static struct smp_hotplug_thread vmbus_irq_threads = {
+ .store = &vmbus_irqd,
+ .setup = vmbus_irqd_setup,
+ .thread_should_run = vmbus_irqd_should_run,
+ .thread_fn = run_vmbus_irqd,
+ .thread_comm = "vmbus_irq/%u",
+};
+
+static void vmbus_isr(void)
+{
+ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
+ vmbus_irqd_wake();
+ } else {
+ lockdep_hardirq_threaded();
+ __vmbus_isr();
+ }
+}
+
static irqreturn_t vmbus_percpu_isr(int irq, void *dev_id)
{
vmbus_isr();
@@ -1345,6 +1393,13 @@ static int vmbus_bus_init(void)
* the VMbus interrupt handler.
*/

+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && !vmbus_irq_initialized) {
+ ret = smpboot_register_percpu_thread(&vmbus_irq_threads);
+ if (ret)
+ goto err_kthread;
+ vmbus_irq_initialized = true;
+ }
+
if (vmbus_irq == -1) {
hv_setup_vmbus_handler(vmbus_isr);
} else {
@@ -1419,6 +1474,11 @@ static int vmbus_bus_init(void)
free_percpu(vmbus_evt);
}
err_setup:
+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
+ smpboot_unregister_percpu_thread(&vmbus_irq_threads);
+ vmbus_irq_initialized = false;
+ }
+err_kthread:
bus_unregister(&hv_bus);
return ret;
}
@@ -2818,6 +2878,10 @@ static void __exit vmbus_exit(void)
free_percpu_irq(vmbus_irq, vmbus_evt);
free_percpu(vmbus_evt);
}
+ if (IS_ENABLED(CONFIG_PREEMPT_RT) && vmbus_irq_initialized) {
+ smpboot_unregister_percpu_thread(&vmbus_irq_threads);
+ vmbus_irq_initialized = false;
+ }
for_each_online_cpu(cpu) {
struct hv_per_cpu_context *hv_cpu
= per_cpu_ptr(hv_context.cpu_context, cpu);
diff --git a/drivers/hwmon/dell-smm-hwmon.c b/drivers/hwmon/dell-smm-hwmon.c
index 9df78861f5f88..2cb92a53e7c2c 100644
--- a/drivers/hwmon/dell-smm-hwmon.c
+++ b/drivers/hwmon/dell-smm-hwmon.c
@@ -1275,6 +1275,13 @@ static const struct dmi_system_id i8k_dmi_table[] __initconst = {
DMI_MATCH(DMI_PRODUCT_NAME, "MP061"),
},
},
+ {
+ .ident = "Dell OptiPlex 7080",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "OptiPlex 7080"),
+ },
+ },
{
.ident = "Dell OptiPlex 7060",
.matches = {
diff --git a/drivers/hwmon/f71882fg.c b/drivers/hwmon/f71882fg.c
index 7c941d320a186..734df959276af 100644
--- a/drivers/hwmon/f71882fg.c
+++ b/drivers/hwmon/f71882fg.c
@@ -51,6 +51,7 @@
#define SIO_F81866_ID 0x1010 /* Chipset ID */
#define SIO_F71858AD_ID 0x0903 /* Chipset ID */
#define SIO_F81966_ID 0x1502 /* Chipset ID */
+#define SIO_F81968_ID 0x1806 /* Chipset ID */

#define REGION_LENGTH 8
#define ADDR_REG_OFFSET 5
@@ -2570,6 +2571,7 @@ static int __init f71882fg_find(int sioaddr, struct f71882fg_sio_data *sio_data)
break;
case SIO_F81866_ID:
case SIO_F81966_ID:
+ case SIO_F81968_ID:
sio_data->type = f81866a;
break;
default:
@@ -2599,9 +2601,9 @@ static int __init f71882fg_find(int sioaddr, struct f71882fg_sio_data *sio_data)
address &= ~(REGION_LENGTH - 1); /* Ignore 3 LSB */

err = address;
- pr_info("Found %s chip at %#x, revision %d\n",
+ pr_info("Found %s chip at %#x, revision %d, devid: %04x\n",
f71882fg_names[sio_data->type], (unsigned int)address,
- (int)superio_inb(sioaddr, SIO_REG_DEVREV));
+ (int)superio_inb(sioaddr, SIO_REG_DEVREV), devid);
exit:
superio_exit(sioaddr);
return err;
diff --git a/drivers/hwmon/ibmpex.c b/drivers/hwmon/ibmpex.c
index 129f3a9e8fe96..228c5f6c6f383 100644
--- a/drivers/hwmon/ibmpex.c
+++ b/drivers/hwmon/ibmpex.c
@@ -277,9 +277,6 @@ static ssize_t ibmpex_high_low_store(struct device *dev,
{
struct ibmpex_bmc_data *data = dev_get_drvdata(dev);

- if (!data)
- return -ENODEV;
-
ibmpex_reset_high_low_data(data);

return count;
@@ -511,9 +508,6 @@ static void ibmpex_bmc_delete(struct ibmpex_bmc_data *data)
{
int i, j;

- hwmon_device_unregister(data->hwmon_dev);
- dev_set_drvdata(data->bmc_device, NULL);
-
device_remove_file(data->bmc_device,
&sensor_dev_attr_reset_high_low.dev_attr);
device_remove_file(data->bmc_device, &dev_attr_name.attr);
@@ -527,7 +521,8 @@ static void ibmpex_bmc_delete(struct ibmpex_bmc_data *data)
}

list_del(&data->list);
-
+ dev_set_drvdata(data->bmc_device, NULL);
+ hwmon_device_unregister(data->hwmon_dev);
ipmi_destroy_user(data->user);
kfree(data->sensors);
kfree(data);
diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
index 1218a3b449a80..ece183a0d4e91 100644
--- a/drivers/hwmon/nct6775-platform.c
+++ b/drivers/hwmon/nct6775-platform.c
@@ -1356,6 +1356,7 @@ static const char * const asus_msi_boards[] = {
"Pro WS W680-ACE IPMI",
"Pro WS W790-ACE",
"Pro WS W790E-SAGE SE",
+ "Pro WS WRX90E-SAGE SE",
"ProArt B650-CREATOR",
"ProArt B660-CREATOR D4",
"ProArt B760-CREATOR D4",
diff --git a/drivers/hwmon/pmbus/mpq8785.c b/drivers/hwmon/pmbus/mpq8785.c
index 7f87e117b49de..7c08f64a7f0a9 100644
--- a/drivers/hwmon/pmbus/mpq8785.c
+++ b/drivers/hwmon/pmbus/mpq8785.c
@@ -4,10 +4,21 @@
*/

#include <linux/i2c.h>
+#include <linux/bitops.h>
#include <linux/module.h>
+#include <linux/property.h>
#include <linux/of_device.h>
#include "pmbus.h"

+#define MPM82504_READ_TEMPERATURE_1_SIGN_POS 9
+
+enum chips { mpm82504, mpq8785 };
+
+static u16 voltage_scale_loop_max_val[] = {
+ [mpm82504] = GENMASK(9, 0),
+ [mpq8785] = GENMASK(10, 0),
+};
+
static int mpq8785_identify(struct i2c_client *client,
struct pmbus_driver_info *info)
{
@@ -34,6 +45,47 @@ static int mpq8785_identify(struct i2c_client *client,
return 0;
};

+static int mpq8785_read_byte_data(struct i2c_client *client, int page, int reg)
+{
+ int ret;
+
+ switch (reg) {
+ case PMBUS_VOUT_MODE:
+ ret = pmbus_read_byte_data(client, page, reg);
+ if (ret < 0)
+ return ret;
+
+ if ((ret >> 5) == 1) {
+ /*
+ * The MPQ8785 chip reports VOUT_MODE as VID mode, but the driver
+ * treats VID as direct mode. Without this, identification would fail
+ * due to mode mismatch.
+ * This override ensures the reported mode matches the driver
+ * configuration, allowing successful initialization.
+ */
+ return PB_VOUT_MODE_DIRECT;
+ }
+
+ return ret;
+ default:
+ return -ENODATA;
+ }
+}
+
+static int mpm82504_read_word_data(struct i2c_client *client, int page,
+ int phase, int reg)
+{
+ int ret;
+
+ ret = pmbus_read_word_data(client, page, phase, reg);
+
+ if (ret < 0 || reg != PMBUS_READ_TEMPERATURE_1)
+ return ret;
+
+ /* Fix PMBUS_READ_TEMPERATURE_1 signedness */
+ return sign_extend32(ret, MPM82504_READ_TEMPERATURE_1_SIGN_POS) & 0xffff;
+}
+
static struct pmbus_driver_info mpq8785_info = {
.pages = 1,
.format[PSC_VOLTAGE_IN] = direct,
@@ -53,26 +105,69 @@ static struct pmbus_driver_info mpq8785_info = {
PMBUS_HAVE_VOUT | PMBUS_HAVE_STATUS_VOUT |
PMBUS_HAVE_IOUT | PMBUS_HAVE_STATUS_IOUT |
PMBUS_HAVE_TEMP | PMBUS_HAVE_STATUS_TEMP,
- .identify = mpq8785_identify,
-};
-
-static int mpq8785_probe(struct i2c_client *client)
-{
- return pmbus_do_probe(client, &mpq8785_info);
};

static const struct i2c_device_id mpq8785_id[] = {
- { "mpq8785" },
+ { "mpm82504", mpm82504 },
+ { "mpq8785", mpq8785 },
{ },
};
MODULE_DEVICE_TABLE(i2c, mpq8785_id);

static const struct of_device_id __maybe_unused mpq8785_of_match[] = {
- { .compatible = "mps,mpq8785" },
+ { .compatible = "mps,mpm82504", .data = (void *)mpm82504 },
+ { .compatible = "mps,mpq8785", .data = (void *)mpq8785 },
{}
};
MODULE_DEVICE_TABLE(of, mpq8785_of_match);

+static int mpq8785_probe(struct i2c_client *client)
+{
+ struct device *dev = &client->dev;
+ struct pmbus_driver_info *info;
+ enum chips chip_id;
+ u32 voltage_scale;
+ int ret;
+
+ info = devm_kmemdup(dev, &mpq8785_info, sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ if (dev->of_node)
+ chip_id = (kernel_ulong_t)of_device_get_match_data(dev);
+ else
+ chip_id = (kernel_ulong_t)i2c_get_match_data(client);
+
+ switch (chip_id) {
+ case mpm82504:
+ info->format[PSC_VOLTAGE_OUT] = direct;
+ info->m[PSC_VOLTAGE_OUT] = 8;
+ info->b[PSC_VOLTAGE_OUT] = 0;
+ info->R[PSC_VOLTAGE_OUT] = 2;
+ info->read_word_data = mpm82504_read_word_data;
+ break;
+ case mpq8785:
+ info->identify = mpq8785_identify;
+ info->read_byte_data = mpq8785_read_byte_data;
+ break;
+ default:
+ return -ENODEV;
+ }
+
+ if (!device_property_read_u32(dev, "mps,vout-fb-divider-ratio-permille",
+ &voltage_scale)) {
+ if (voltage_scale > voltage_scale_loop_max_val[chip_id])
+ return -EINVAL;
+
+ ret = i2c_smbus_write_word_data(client, PMBUS_VOUT_SCALE_LOOP,
+ voltage_scale);
+ if (ret)
+ return ret;
+ }
+
+ return pmbus_do_probe(client, info);
+};
+
static struct i2c_driver mpq8785_driver = {
.driver = {
.name = "mpq8785",
diff --git a/drivers/hwspinlock/omap_hwspinlock.c b/drivers/hwspinlock/omap_hwspinlock.c
index 27b47b8623c09..2d8de835bc242 100644
--- a/drivers/hwspinlock/omap_hwspinlock.c
+++ b/drivers/hwspinlock/omap_hwspinlock.c
@@ -88,7 +88,9 @@ static int omap_hwspinlock_probe(struct platform_device *pdev)
* make sure the module is enabled and clocked before reading
* the module SYSSTATUS register
*/
- devm_pm_runtime_enable(&pdev->dev);
+ ret = devm_pm_runtime_enable(&pdev->dev);
+ if (ret)
+ return ret;
ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-core.c b/drivers/hwtracing/coresight/coresight-etm3x-core.c
index c103f4c70f5d0..994ae1e08af8b 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-core.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-core.c
@@ -812,16 +812,16 @@ static int __init etm_hp_setup(void)
{
int ret;

- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ARM_CORESIGHT_STARTING,
- "arm/coresight:starting",
- etm_starting_cpu, etm_dying_cpu);
+ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ARM_CORESIGHT_STARTING,
+ "arm/coresight:starting",
+ etm_starting_cpu, etm_dying_cpu);

if (ret)
return ret;

- ret = cpuhp_setup_state_nocalls_cpuslocked(CPUHP_AP_ONLINE_DYN,
- "arm/coresight:online",
- etm_online_cpu, NULL);
+ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN,
+ "arm/coresight:online",
+ etm_online_cpu, NULL);

/* HP dyn state ID returned in ret on success */
if (ret > 0) {
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index 6eb779affaba8..fe6f956cc3111 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -619,7 +619,8 @@ static int i3c_set_hotjoin(struct i3c_master_controller *master, bool enable)
else
ret = master->ops->disable_hotjoin(master);

- master->hotjoin = enable;
+ if (!ret)
+ master->hotjoin = enable;

i3c_bus_normaluse_unlock(&master->bus);

@@ -2810,7 +2811,6 @@ int i3c_master_register(struct i3c_master_controller *master,
INIT_LIST_HEAD(&master->boardinfo.i3c);

device_initialize(&master->dev);
- dev_set_name(&master->dev, "i3c-%d", i3cbus->id);

master->dev.dma_mask = parent->dma_mask;
master->dev.coherent_dma_mask = parent->coherent_dma_mask;
@@ -2820,6 +2820,8 @@ int i3c_master_register(struct i3c_master_controller *master,
if (ret)
goto err_put_dev;

+ dev_set_name(&master->dev, "i3c-%d", i3cbus->id);
+
ret = of_populate_i3c_bus(master);
if (ret)
goto err_put_dev;
diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
index dbcd3984f2578..4c019c746f231 100644
--- a/drivers/i3c/master/dw-i3c-master.c
+++ b/drivers/i3c/master/dw-i3c-master.c
@@ -1102,6 +1102,7 @@ static int dw_i3c_master_i2c_xfers(struct i2c_dev_desc *dev,
dev_err(master->dev,
"<%s> cannot resume i3c bus master, err: %d\n",
__func__, ret);
+ dw_i3c_master_free_xfer(xfer);
return ret;
}

@@ -1575,6 +1576,8 @@ int dw_i3c_common_probe(struct dw_i3c_master *master,
spin_lock_init(&master->xferqueue.lock);
INIT_LIST_HEAD(&master->xferqueue.list);

+ spin_lock_init(&master->devs_lock);
+
writel(INTR_ALL, master->regs + INTR_STATUS);
irq = platform_get_irq(pdev, 0);
ret = devm_request_irq(&pdev->dev, irq,
diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
index fe955703e59b5..68c2923f55664 100644
--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
+++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
@@ -328,6 +328,14 @@ static int hci_dma_init(struct i3c_hci *hci)
rh_reg_write(INTR_SIGNAL_ENABLE, regval);

ring_ready:
+ /*
+ * The MIPI I3C HCI specification does not document reset values for
+ * RING_OPERATION1 fields and some controllers (e.g. Intel controllers)
+ * do not reset the values, so ensure the ring pointers are set to zero
+ * here.
+ */
+ rh_reg_write(RING_OPERATION1, 0);
+
rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE |
RING_CTRL_RUN_STOP);
}
diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
index 985f30ef0c939..cc13892e45963 100644
--- a/drivers/i3c/master/svc-i3c-master.c
+++ b/drivers/i3c/master/svc-i3c-master.c
@@ -426,8 +426,8 @@ static void svc_i3c_master_ibi_work(struct work_struct *work)
{
struct svc_i3c_master *master = container_of(work, struct svc_i3c_master, ibi_work);
struct svc_i3c_i2c_dev_data *data;
+ struct i3c_dev_desc *dev = NULL;
unsigned int ibitype, ibiaddr;
- struct i3c_dev_desc *dev;
u32 status, val;
int ret;

@@ -511,7 +511,7 @@ static void svc_i3c_master_ibi_work(struct work_struct *work)
* for the slave to interrupt again.
*/
if (svc_i3c_master_error(master)) {
- if (master->ibi.tbq_slot) {
+ if (master->ibi.tbq_slot && dev) {
data = i3c_dev_get_master_data(dev);
i3c_generic_ibi_recycle_slot(data->ibi_pool,
master->ibi.tbq_slot);
diff --git a/drivers/iio/accel/adxl380.c b/drivers/iio/accel/adxl380.c
index 10ed3b04ca307..cfd24ef538a64 100644
--- a/drivers/iio/accel/adxl380.c
+++ b/drivers/iio/accel/adxl380.c
@@ -977,6 +977,7 @@ static irqreturn_t adxl380_irq_handler(int irq, void *p)
if (ret)
return IRQ_HANDLED;

+ fifo_entries = rounddown(fifo_entries, st->fifo_set_size);
for (i = 0; i < fifo_entries; i += st->fifo_set_size) {
ret = regmap_noinc_read(st->regmap, ADXL380_FIFO_DATA,
&st->fifo_buf[i],
diff --git a/drivers/iio/accel/bma180.c b/drivers/iio/accel/bma180.c
index 2445a0f7bc2ba..ff64bfb348ed8 100644
--- a/drivers/iio/accel/bma180.c
+++ b/drivers/iio/accel/bma180.c
@@ -990,8 +990,9 @@ static int bma180_probe(struct i2c_client *client)
}

ret = devm_request_irq(dev, client->irq,
- iio_trigger_generic_data_rdy_poll, IRQF_TRIGGER_RISING,
- "bma180_event", data->trig);
+ iio_trigger_generic_data_rdy_poll,
+ IRQF_TRIGGER_RISING | IRQF_NO_THREAD,
+ "bma180_event", data->trig);
if (ret) {
dev_err(dev, "unable to request IRQ\n");
goto err_trigger_free;
diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
index 87c54e41f6ccd..2b87f7f5508bb 100644
--- a/drivers/iio/accel/sca3000.c
+++ b/drivers/iio/accel/sca3000.c
@@ -1496,7 +1496,11 @@ static int sca3000_probe(struct spi_device *spi)
if (ret)
goto error_free_irq;

- return iio_device_register(indio_dev);
+ ret = iio_device_register(indio_dev);
+ if (ret)
+ goto error_free_irq;
+
+ return 0;

error_free_irq:
if (spi->irq)
diff --git a/drivers/iio/adc/ad7766.c b/drivers/iio/adc/ad7766.c
index 4d570383ef025..1e6bfe8765ab3 100644
--- a/drivers/iio/adc/ad7766.c
+++ b/drivers/iio/adc/ad7766.c
@@ -261,7 +261,7 @@ static int ad7766_probe(struct spi_device *spi)
* don't enable the interrupt to avoid extra load on the system
*/
ret = devm_request_irq(&spi->dev, spi->irq, ad7766_irq,
- IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
+ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN | IRQF_NO_THREAD,
dev_name(&spi->dev),
ad7766->trig);
if (ret < 0)
diff --git a/drivers/iio/gyro/itg3200_buffer.c b/drivers/iio/gyro/itg3200_buffer.c
index 4cfa0d4395605..d1c125a77308a 100644
--- a/drivers/iio/gyro/itg3200_buffer.c
+++ b/drivers/iio/gyro/itg3200_buffer.c
@@ -118,11 +118,9 @@ int itg3200_probe_trigger(struct iio_dev *indio_dev)
if (!st->trig)
return -ENOMEM;

- ret = request_irq(st->i2c->irq,
- &iio_trigger_generic_data_rdy_poll,
- IRQF_TRIGGER_RISING,
- "itg3200_data_rdy",
- st->trig);
+ ret = request_irq(st->i2c->irq, &iio_trigger_generic_data_rdy_poll,
+ IRQF_TRIGGER_RISING | IRQF_NO_THREAD,
+ "itg3200_data_rdy", st->trig);
if (ret)
goto error_free_trig;

diff --git a/drivers/iio/gyro/itg3200_core.c b/drivers/iio/gyro/itg3200_core.c
index cd8a2dae56cd9..bfe95ec1abda9 100644
--- a/drivers/iio/gyro/itg3200_core.c
+++ b/drivers/iio/gyro/itg3200_core.c
@@ -93,6 +93,8 @@ static int itg3200_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_RAW:
reg = (u8)chan->address;
ret = itg3200_read_reg_s16(indio_dev, reg, val);
+ if (ret)
+ return ret;
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
*val = 0;
diff --git a/drivers/iio/gyro/mpu3050-core.c b/drivers/iio/gyro/mpu3050-core.c
index 35af68b41408f..4dcd0cc545518 100644
--- a/drivers/iio/gyro/mpu3050-core.c
+++ b/drivers/iio/gyro/mpu3050-core.c
@@ -1165,10 +1165,8 @@ int mpu3050_common_probe(struct device *dev,
mpu3050->regs[1].supply = mpu3050_reg_vlogic;
ret = devm_regulator_bulk_get(dev, ARRAY_SIZE(mpu3050->regs),
mpu3050->regs);
- if (ret) {
- dev_err(dev, "Cannot get regulators\n");
- return ret;
- }
+ if (ret)
+ return dev_err_probe(dev, ret, "Cannot get regulators\n");

ret = mpu3050_power_up(mpu3050);
if (ret)
diff --git a/drivers/iio/light/si1145.c b/drivers/iio/light/si1145.c
index 66abda021696a..1050bf68e076e 100644
--- a/drivers/iio/light/si1145.c
+++ b/drivers/iio/light/si1145.c
@@ -1250,7 +1250,7 @@ static int si1145_probe_trigger(struct iio_dev *indio_dev)

ret = devm_request_irq(&client->dev, client->irq,
iio_trigger_generic_data_rdy_poll,
- IRQF_TRIGGER_FALLING,
+ IRQF_TRIGGER_FALLING | IRQF_NO_THREAD,
"si1145_irq",
trig);
if (ret < 0) {
diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
index 18077fb463a91..6975e0af50536 100644
--- a/drivers/iio/magnetometer/ak8975.c
+++ b/drivers/iio/magnetometer/ak8975.c
@@ -581,7 +581,7 @@ static int ak8975_setup_irq(struct ak8975_data *data)
irq = gpiod_to_irq(data->eoc_gpiod);

rc = devm_request_irq(&client->dev, irq, ak8975_irq_handler,
- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ IRQF_TRIGGER_RISING,
dev_name(&client->dev), data);
if (rc < 0) {
dev_err(&client->dev, "irq %d request failed: %d\n", irq, rc);
diff --git a/drivers/iio/pressure/mprls0025pa.c b/drivers/iio/pressure/mprls0025pa.c
index 3b6145348c2e3..e13e6a7ef8db9 100644
--- a/drivers/iio/pressure/mprls0025pa.c
+++ b/drivers/iio/pressure/mprls0025pa.c
@@ -59,7 +59,7 @@
*
* Values given to the userspace in sysfs interface:
* * raw - press_cnt
- * * offset - (-1 * outputmin) - pmin / scale
+ * * offset - (-1 * outputmin) + pmin / scale
* note: With all sensors from the datasheet pmin = 0
* which reduces the offset to (-1 * outputmin)
*/
@@ -160,8 +160,8 @@ static const struct iio_chan_spec mpr_channels[] = {
BIT(IIO_CHAN_INFO_OFFSET),
.scan_index = 0,
.scan_type = {
- .sign = 's',
- .realbits = 32,
+ .sign = 'u',
+ .realbits = 24,
.storagebits = 32,
.endianness = IIO_CPU,
},
@@ -313,8 +313,7 @@ static int mpr_read_raw(struct iio_dev *indio_dev,
return IIO_VAL_INT_PLUS_NANO;
case IIO_CHAN_INFO_OFFSET:
*val = data->offset;
- *val2 = data->offset2;
- return IIO_VAL_INT_PLUS_NANO;
+ return IIO_VAL_INT;
default:
return -EINVAL;
}
@@ -330,8 +329,9 @@ int mpr_common_probe(struct device *dev, const struct mpr_ops *ops, int irq)
struct mpr_data *data;
struct iio_dev *indio_dev;
const char *triplet;
- s64 scale, offset;
+ s64 odelta, pdelta;
u32 func;
+ s32 tmp;

indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
if (!indio_dev)
@@ -405,23 +405,17 @@ int mpr_common_probe(struct device *dev, const struct mpr_ops *ops, int irq)
data->outmin = mpr_func_spec[data->function].output_min;
data->outmax = mpr_func_spec[data->function].output_max;

- /* use 64 bit calculation for preserving a reasonable precision */
- scale = div_s64(((s64)(data->pmax - data->pmin)) * NANO,
- data->outmax - data->outmin);
- data->scale = div_s64_rem(scale, NANO, &data->scale2);
- /*
- * multiply with NANO before dividing by scale and later divide by NANO
- * again.
- */
- offset = ((-1LL) * (s64)data->outmin) * NANO -
- div_s64(div_s64((s64)data->pmin * NANO, scale), NANO);
- data->offset = div_s64_rem(offset, NANO, &data->offset2);
+ odelta = data->outmax - data->outmin;
+ pdelta = data->pmax - data->pmin;
+
+ data->scale = div_s64_rem(div_s64(pdelta * NANO, odelta), NANO, &tmp);
+ data->scale2 = tmp;
+
+ data->offset = div_s64(odelta * data->pmin, pdelta) - data->outmin;

if (data->irq > 0) {
- ret = devm_request_irq(dev, data->irq, mpr_eoc_handler,
- IRQF_TRIGGER_RISING,
- dev_name(dev),
- data);
+ ret = devm_request_irq(dev, data->irq, mpr_eoc_handler, 0,
+ dev_name(dev), data);
if (ret)
return dev_err_probe(dev, ret,
"request irq %d failed\n", data->irq);
diff --git a/drivers/iio/pressure/mprls0025pa.h b/drivers/iio/pressure/mprls0025pa.h
index d62a018eaff32..b6944b3051267 100644
--- a/drivers/iio/pressure/mprls0025pa.h
+++ b/drivers/iio/pressure/mprls0025pa.h
@@ -53,7 +53,6 @@ enum mpr_func_id {
* @scale: pressure scale
* @scale2: pressure scale, decimal number
* @offset: pressure offset
- * @offset2: pressure offset, decimal number
* @gpiod_reset: reset
* @irq: end of conversion irq. used to distinguish between irq mode and
* reading in a loop until data is ready
@@ -75,7 +74,6 @@ struct mpr_data {
int scale;
int scale2;
int offset;
- int offset2;
struct gpio_desc *gpiod_reset;
int irq;
struct completion completion;
diff --git a/drivers/iio/pressure/mprls0025pa_spi.c b/drivers/iio/pressure/mprls0025pa_spi.c
index 3aed14cd95c5a..241ad36f6501a 100644
--- a/drivers/iio/pressure/mprls0025pa_spi.c
+++ b/drivers/iio/pressure/mprls0025pa_spi.c
@@ -8,6 +8,7 @@
* https://prod-edam.honeywell.com/content/dam/honeywell-edam/sps/siot/en-us/products/sensors/pressure-sensors/board-mount-pressure-sensors/micropressure-mpr-series/documents/sps-siot-mpr-series-datasheet-32332628-ciid-172626.pdf
*/

+#include <linux/array_size.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/mod_devicetable.h>
@@ -40,17 +41,25 @@ static int mpr_spi_xfer(struct mpr_data *data, const u8 cmd, const u8 pkt_len)
{
struct spi_device *spi = to_spi_device(data->dev);
struct mpr_spi_buf *buf = spi_get_drvdata(spi);
- struct spi_transfer xfer;
+ struct spi_transfer xfers[2] = { };

if (pkt_len > MPR_MEASUREMENT_RD_SIZE)
return -EOVERFLOW;

buf->tx[0] = cmd;
- xfer.tx_buf = buf->tx;
- xfer.rx_buf = data->buffer;
- xfer.len = pkt_len;

- return spi_sync_transfer(spi, &xfer, 1);
+ /*
+ * Dummy transfer with no data, just cause a 2.5us+ delay between the CS assert
+ * and the first clock edge as per the datasheet tHDSS timing requirement.
+ */
+ xfers[0].delay.value = 2500;
+ xfers[0].delay.unit = SPI_DELAY_UNIT_NSECS;
+
+ xfers[1].tx_buf = buf->tx;
+ xfers[1].rx_buf = data->buffer;
+ xfers[1].len = pkt_len;
+
+ return spi_sync_transfer(spi, xfers, ARRAY_SIZE(xfers));
}

static const struct mpr_ops mpr_spi_ops = {
diff --git a/drivers/iio/test/Kconfig b/drivers/iio/test/Kconfig
index 33cca49c8058a..3b6d9b1476d8c 100644
--- a/drivers/iio/test/Kconfig
+++ b/drivers/iio/test/Kconfig
@@ -8,7 +8,6 @@ config IIO_GTS_KUNIT_TEST
tristate "Test IIO formatting functions" if !KUNIT_ALL_TESTS
depends on KUNIT
select IIO_GTS_HELPER
- select TEST_KUNIT_DEVICE_HELPERS
default KUNIT_ALL_TESTS
help
build unit tests for the IIO light sensor gain-time-scale helpers.
diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
index a1291f475466d..f513c4fe9f38d 100644
--- a/drivers/infiniband/core/cache.c
+++ b/drivers/infiniband/core/cache.c
@@ -927,6 +927,13 @@ static int gid_table_setup_one(struct ib_device *ib_dev)
if (err)
return err;

+ /*
+ * Mark the device as ready for GID cache updates. This allows netdev
+ * event handlers to update the GID cache even before the device is
+ * fully registered.
+ */
+ ib_device_enable_gid_updates(ib_dev);
+
rdma_roce_rescan_device(ib_dev);

return err;
@@ -1566,7 +1573,8 @@ static void ib_cache_event_task(struct work_struct *_work)
* the cache.
*/
ret = ib_cache_update(work->event.device, work->event.element.port_num,
- work->event.event == IB_EVENT_GID_CHANGE,
+ work->event.event == IB_EVENT_GID_CHANGE ||
+ work->event.event == IB_EVENT_CLIENT_REREGISTER,
work->event.event == IB_EVENT_PKEY_CHANGE,
work->enforce_security);

@@ -1667,6 +1675,12 @@ void ib_cache_release_one(struct ib_device *device)

void ib_cache_cleanup_one(struct ib_device *device)
{
+ /*
+ * Clear the GID updates mark first to prevent event handlers from
+ * accessing the device while it's being torn down.
+ */
+ ib_device_disable_gid_updates(device);
+
/* The cleanup function waits for all in-progress workqueue
* elements and cleans up the GID cache. This function should be
* called after the device was removed from the devices list and
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index 05102769a918a..a2c36666e6fcb 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -100,6 +100,9 @@ void ib_enum_all_roce_netdevs(roce_netdev_filter filter,
roce_netdev_callback cb,
void *cookie);

+void ib_device_enable_gid_updates(struct ib_device *device);
+void ib_device_disable_gid_updates(struct ib_device *device);
+
typedef int (*nldev_callback)(struct ib_device *device,
struct sk_buff *skb,
struct netlink_callback *cb,
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index bbc131737378e..666ce83cc35c4 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -93,6 +93,7 @@ static struct workqueue_struct *ib_unreg_wq;
static DEFINE_XARRAY_FLAGS(devices, XA_FLAGS_ALLOC);
static DECLARE_RWSEM(devices_rwsem);
#define DEVICE_REGISTERED XA_MARK_1
+#define DEVICE_GID_UPDATES XA_MARK_2

static u32 highest_client_id;
#define CLIENT_REGISTERED XA_MARK_1
@@ -2393,11 +2394,42 @@ void ib_enum_all_roce_netdevs(roce_netdev_filter filter,
unsigned long index;

down_read(&devices_rwsem);
- xa_for_each_marked (&devices, index, dev, DEVICE_REGISTERED)
+ xa_for_each_marked(&devices, index, dev, DEVICE_GID_UPDATES)
ib_enum_roce_netdev(dev, filter, filter_cookie, cb, cookie);
up_read(&devices_rwsem);
}

+/**
+ * ib_device_enable_gid_updates - Mark device as ready for GID cache updates
+ * @device: Device to mark
+ *
+ * Called after GID table is allocated and initialized. After this mark is set,
+ * netdevice event handlers can update the device's GID cache. This allows
+ * events that arrive while the device is still registering to be processed,
+ * avoiding stale GID entries when netdev properties change during
+ * registration.
+ */
+void ib_device_enable_gid_updates(struct ib_device *device)
+{
+ down_write(&devices_rwsem);
+ xa_set_mark(&devices, device->index, DEVICE_GID_UPDATES);
+ up_write(&devices_rwsem);
+}
+
+/**
+ * ib_device_disable_gid_updates - Clear the GID updates mark
+ * @device: Device to unmark
+ *
+ * Called before GID table cleanup to prevent event handlers from accessing
+ * the device while it's being torn down.
+ */
+void ib_device_disable_gid_updates(struct ib_device *device)
+{
+ down_write(&devices_rwsem);
+ xa_clear_mark(&devices, device->index, DEVICE_GID_UPDATES);
+ up_write(&devices_rwsem);
+}
+
/*
* ib_enum_all_devs - enumerate all ib_devices
* @cb: Callback to call for each found ib_device
diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
index 96a678250e553..3758ee7698224 100644
--- a/drivers/infiniband/core/iwcm.c
+++ b/drivers/infiniband/core/iwcm.c
@@ -95,7 +95,6 @@ static struct workqueue_struct *iwcm_wq;
struct iwcm_work {
struct work_struct work;
struct iwcm_id_private *cm_id;
- struct list_head list;
struct iw_cm_event event;
struct list_head free_list;
};
@@ -176,7 +175,6 @@ static int alloc_work_entries(struct iwcm_id_private *cm_id_priv, int count)
return -ENOMEM;
}
work->cm_id = cm_id_priv;
- INIT_LIST_HEAD(&work->list);
put_work(work);
}
return 0;
@@ -211,7 +209,6 @@ static void free_cm_id(struct iwcm_id_private *cm_id_priv)
static bool iwcm_deref_id(struct iwcm_id_private *cm_id_priv)
{
if (refcount_dec_and_test(&cm_id_priv->refcount)) {
- BUG_ON(!list_empty(&cm_id_priv->work_list));
free_cm_id(cm_id_priv);
return true;
}
@@ -258,7 +255,6 @@ struct iw_cm_id *iw_create_cm_id(struct ib_device *device,
refcount_set(&cm_id_priv->refcount, 1);
init_waitqueue_head(&cm_id_priv->connect_wait);
init_completion(&cm_id_priv->destroy_comp);
- INIT_LIST_HEAD(&cm_id_priv->work_list);
INIT_LIST_HEAD(&cm_id_priv->work_free_list);

return &cm_id_priv->id;
@@ -1005,13 +1001,13 @@ static int process_event(struct iwcm_id_private *cm_id_priv,
}

/*
- * Process events on the work_list for the cm_id. If the callback
- * function requests that the cm_id be deleted, a flag is set in the
- * cm_id flags to indicate that when the last reference is
- * removed, the cm_id is to be destroyed. This is necessary to
- * distinguish between an object that will be destroyed by the app
- * thread asleep on the destroy_comp list vs. an object destroyed
- * here synchronously when the last reference is removed.
+ * Process events for the cm_id. If the callback function requests
+ * that the cm_id be deleted, a flag is set in the cm_id flags to
+ * indicate that when the last reference is removed, the cm_id is
+ * to be destroyed. This is necessary to distinguish between an
+ * object that will be destroyed by the app thread asleep on the
+ * destroy_comp list vs. an object destroyed here synchronously
+ * when the last reference is removed.
*/
static void cm_work_handler(struct work_struct *_work)
{
@@ -1022,35 +1018,26 @@ static void cm_work_handler(struct work_struct *_work)
int ret = 0;

spin_lock_irqsave(&cm_id_priv->lock, flags);
- while (!list_empty(&cm_id_priv->work_list)) {
- work = list_first_entry(&cm_id_priv->work_list,
- struct iwcm_work, list);
- list_del_init(&work->list);
- levent = work->event;
- put_work(work);
- spin_unlock_irqrestore(&cm_id_priv->lock, flags);
-
- if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
- ret = process_event(cm_id_priv, &levent);
- if (ret) {
- destroy_cm_id(&cm_id_priv->id);
- WARN_ON_ONCE(iwcm_deref_id(cm_id_priv));
- }
- } else
- pr_debug("dropping event %d\n", levent.event);
- if (iwcm_deref_id(cm_id_priv))
- return;
- spin_lock_irqsave(&cm_id_priv->lock, flags);
- }
+ levent = work->event;
+ put_work(work);
spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+
+ if (!test_bit(IWCM_F_DROP_EVENTS, &cm_id_priv->flags)) {
+ ret = process_event(cm_id_priv, &levent);
+ if (ret) {
+ destroy_cm_id(&cm_id_priv->id);
+ WARN_ON_ONCE(iwcm_deref_id(cm_id_priv));
+ }
+ } else
+ pr_debug("dropping event %d\n", levent.event);
+ if (iwcm_deref_id(cm_id_priv))
+ return;
}

/*
* This function is called on interrupt context. Schedule events on
* the iwcm_wq thread to allow callback functions to downcall into
- * the CM and/or block. Events are queued to a per-CM_ID
- * work_list. If this is the first event on the work_list, the work
- * element is also queued on the iwcm_wq thread.
+ * the CM and/or block.
*
* Each event holds a reference on the cm_id. Until the last posted
* event has been delivered and processed, the cm_id cannot be
@@ -1092,7 +1079,6 @@ static int cm_event_handler(struct iw_cm_id *cm_id,
}

refcount_inc(&cm_id_priv->refcount);
- list_add_tail(&work->list, &cm_id_priv->work_list);
queue_work(iwcm_wq, &work->work);
out:
spin_unlock_irqrestore(&cm_id_priv->lock, flags);
diff --git a/drivers/infiniband/core/iwcm.h b/drivers/infiniband/core/iwcm.h
index bf74639be1287..b56fb12edece4 100644
--- a/drivers/infiniband/core/iwcm.h
+++ b/drivers/infiniband/core/iwcm.h
@@ -50,7 +50,6 @@ struct iwcm_id_private {
struct ib_qp *qp;
struct completion destroy_comp;
wait_queue_head_t connect_wait;
- struct list_head work_list;
spinlock_t lock;
refcount_t refcount;
struct list_head work_free_list;
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
index 6354ddf2a274c..2522ff1cc462c 100644
--- a/drivers/infiniband/core/rw.c
+++ b/drivers/infiniband/core/rw.c
@@ -651,34 +651,57 @@ unsigned int rdma_rw_mr_factor(struct ib_device *device, u32 port_num,
}
EXPORT_SYMBOL(rdma_rw_mr_factor);

+/**
+ * rdma_rw_max_send_wr - compute max Send WRs needed for RDMA R/W contexts
+ * @dev: RDMA device
+ * @port_num: port number
+ * @max_rdma_ctxs: number of rdma_rw_ctx structures
+ * @create_flags: QP create flags (pass IB_QP_CREATE_INTEGRITY_EN if
+ * data integrity will be enabled on the QP)
+ *
+ * Returns the total number of Send Queue entries needed for
+ * @max_rdma_ctxs. The result accounts for memory registration and
+ * invalidation work requests when the device requires them.
+ *
+ * ULPs use this to size Send Queues and Send CQs before creating a
+ * Queue Pair.
+ */
+unsigned int rdma_rw_max_send_wr(struct ib_device *dev, u32 port_num,
+ unsigned int max_rdma_ctxs, u32 create_flags)
+{
+ unsigned int factor = 1;
+ unsigned int result;
+
+ if (create_flags & IB_QP_CREATE_INTEGRITY_EN ||
+ rdma_rw_can_use_mr(dev, port_num))
+ factor += 2; /* reg + inv */
+
+ if (check_mul_overflow(factor, max_rdma_ctxs, &result))
+ return UINT_MAX;
+ return result;
+}
+EXPORT_SYMBOL(rdma_rw_max_send_wr);
+
void rdma_rw_init_qp(struct ib_device *dev, struct ib_qp_init_attr *attr)
{
- u32 factor;
+ unsigned int factor = 1;

WARN_ON_ONCE(attr->port_num == 0);

/*
- * Each context needs at least one RDMA READ or WRITE WR.
- *
- * For some hardware we might need more, eventually we should ask the
- * HCA driver for a multiplier here.
- */
- factor = 1;
-
- /*
- * If the device needs MRs to perform RDMA READ or WRITE operations,
- * we'll need two additional MRs for the registrations and the
- * invalidation.
+ * If the device uses MRs to perform RDMA READ or WRITE operations,
+ * or if data integrity is enabled, account for registration and
+ * invalidation work requests.
*/
if (attr->create_flags & IB_QP_CREATE_INTEGRITY_EN ||
rdma_rw_can_use_mr(dev, attr->port_num))
- factor += 2; /* inv + reg */
+ factor += 2; /* reg + inv */

attr->cap.max_send_wr += factor * attr->cap.max_rdma_ctxs;

/*
- * But maybe we were just too high in the sky and the device doesn't
- * even support all we need, and we'll have to live with what we get..
+ * The device might not support all we need, and we'll have to
+ * live with what we get.
*/
attr->cap.max_send_wr =
min_t(u32, attr->cap.max_send_wr, dev->attrs.max_qp_wr);
diff --git a/drivers/infiniband/core/umem_dmabuf.c b/drivers/infiniband/core/umem_dmabuf.c
index 9fcd37761264a..44bb7449738cf 100644
--- a/drivers/infiniband/core/umem_dmabuf.c
+++ b/drivers/infiniband/core/umem_dmabuf.c
@@ -221,13 +221,11 @@ ib_umem_dmabuf_get_pinned_with_dma_device(struct ib_device *device,

err = ib_umem_dmabuf_map_pages(umem_dmabuf);
if (err)
- goto err_unpin;
+ goto err_release;
dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);

return umem_dmabuf;

-err_unpin:
- dma_buf_unpin(umem_dmabuf->attach);
err_release:
dma_resv_unlock(umem_dmabuf->attach->dmabuf->resv);
ib_umem_release(&umem_dmabuf->umem);
diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
index fd67fc9fe85a4..2f7e3c4483fc5 100644
--- a/drivers/infiniband/core/user_mad.c
+++ b/drivers/infiniband/core/user_mad.c
@@ -514,7 +514,8 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
struct rdma_ah_attr ah_attr;
struct ib_ah *ah;
__be64 *tid;
- int ret, data_len, hdr_len, copy_offset, rmpp_active;
+ int ret, hdr_len, copy_offset, rmpp_active;
+ size_t data_len;
u8 base_version;

if (count < hdr_size(file) + IB_MGMT_RMPP_HDR)
@@ -588,7 +589,10 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
}

base_version = ((struct ib_mad_hdr *)&packet->mad.data)->base_version;
- data_len = count - hdr_size(file) - hdr_len;
+ if (check_sub_overflow(count, hdr_size(file) + hdr_len, &data_len)) {
+ ret = -EINVAL;
+ goto err_ah;
+ }
packet->msg = ib_create_send_mad(agent,
be32_to_cpu(packet->mad.hdr.qpn),
packet->mad.hdr.pkey_index, rmpp_active,
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 535bb99ed9f5f..ac81b7d1eec96 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -2031,7 +2031,10 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
if (ret)
return ret;

- user_wr = kmalloc(cmd.wqe_size, GFP_KERNEL);
+ if (cmd.wqe_size < sizeof(struct ib_uverbs_send_wr))
+ return -EINVAL;
+
+ user_wr = kmalloc(cmd.wqe_size, GFP_KERNEL | __GFP_NOWARN);
if (!user_wr)
return -ENOMEM;

@@ -2221,7 +2224,7 @@ ib_uverbs_unmarshall_recv(struct uverbs_req_iter *iter, u32 wr_count,
if (ret)
return ERR_PTR(ret);

- user_wr = kmalloc(wqe_size, GFP_KERNEL);
+ user_wr = kmalloc(wqe_size, GFP_KERNEL | __GFP_NOWARN);
if (!user_wr)
return ERR_PTR(-ENOMEM);

diff --git a/drivers/infiniband/hw/efa/efa_verbs.c b/drivers/infiniband/hw/efa/efa_verbs.c
index 46eddef7a1cc9..fb6b29972fcce 100644
--- a/drivers/infiniband/hw/efa/efa_verbs.c
+++ b/drivers/infiniband/hw/efa/efa_verbs.c
@@ -1584,7 +1584,7 @@ static struct efa_mr *efa_alloc_mr(struct ib_pd *ibpd, int access_flags,
struct efa_mr *mr;

if (udata && udata->inlen &&
- !ib_is_udata_cleared(udata, 0, sizeof(udata->inlen))) {
+ !ib_is_udata_cleared(udata, 0, udata->inlen)) {
ibdev_dbg(&dev->ibdev,
"Incompatible ABI params, udata not cleared\n");
return ERR_PTR(-EINVAL);
diff --git a/drivers/infiniband/hw/hns/hns_roce_ah.c b/drivers/infiniband/hw/hns/hns_roce_ah.c
index 307c35888b300..3b6c6a6e9f977 100644
--- a/drivers/infiniband/hw/hns/hns_roce_ah.c
+++ b/drivers/infiniband/hw/hns/hns_roce_ah.c
@@ -61,7 +61,7 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
u8 tclass = get_tclass(grh);
u8 priority = 0;
u8 tc_mode = 0;
- int ret;
+ int ret = 0;

if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP08 && udata) {
ret = -EOPNOTSUPP;
@@ -78,19 +78,18 @@ int hns_roce_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
ah->av.flowlabel = grh->flow_label;
ah->av.udp_sport = get_ah_udp_sport(ah_attr);
ah->av.tclass = tclass;
+ ah->av.sl = rdma_ah_get_sl(ah_attr);

- ret = hr_dev->hw->get_dscp(hr_dev, tclass, &tc_mode, &priority);
- if (ret == -EOPNOTSUPP)
- ret = 0;
-
- if (ret && grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
- goto err_out;
+ if (grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP) {
+ ret = hr_dev->hw->get_dscp(hr_dev, tclass, &tc_mode, &priority);
+ if (ret == -EOPNOTSUPP)
+ ret = 0;
+ else if (ret)
+ goto err_out;

- if (tc_mode == HNAE3_TC_MAP_MODE_DSCP &&
- grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
- ah->av.sl = priority;
- else
- ah->av.sl = rdma_ah_get_sl(ah_attr);
+ if (tc_mode == HNAE3_TC_MAP_MODE_DSCP)
+ ah->av.sl = priority;
+ }

if (!check_sl_valid(hr_dev, ah->av.sl)) {
ret = -EINVAL;
diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
index f9356cb89497b..5e1ea6335c113 100644
--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
@@ -3683,6 +3683,23 @@ static void hns_roce_v2_write_cqc(struct hns_roce_dev *hr_dev,
HNS_ROCE_V2_CQ_DEFAULT_INTERVAL);
}

+static bool left_sw_wc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+{
+ struct hns_roce_qp *hr_qp;
+
+ list_for_each_entry(hr_qp, &hr_cq->sq_list, sq_node) {
+ if (hr_qp->sq.head != hr_qp->sq.tail)
+ return true;
+ }
+
+ list_for_each_entry(hr_qp, &hr_cq->rq_list, rq_node) {
+ if (hr_qp->rq.head != hr_qp->rq.tail)
+ return true;
+ }
+
+ return false;
+}
+
static int hns_roce_v2_req_notify_cq(struct ib_cq *ibcq,
enum ib_cq_notify_flags flags)
{
@@ -3691,6 +3708,12 @@ static int hns_roce_v2_req_notify_cq(struct ib_cq *ibcq,
struct hns_roce_v2_db cq_db = {};
u32 notify_flag;

+ if (hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN) {
+ if ((flags & IB_CQ_REPORT_MISSED_EVENTS) &&
+ left_sw_wc(hr_dev, hr_cq))
+ return 1;
+ return 0;
+ }
/*
* flags = 0, then notify_flag : next
* flags = 1, then notify flag : solocited
@@ -4999,20 +5022,22 @@ static int hns_roce_set_sl(struct ib_qp *ibqp,
struct ib_device *ibdev = &hr_dev->ib_dev;
int ret;

- ret = hns_roce_hw_v2_get_dscp(hr_dev, get_tclass(&attr->ah_attr.grh),
- &hr_qp->tc_mode, &hr_qp->priority);
- if (ret && ret != -EOPNOTSUPP &&
- grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP) {
- ibdev_err_ratelimited(ibdev,
- "failed to get dscp, ret = %d.\n", ret);
- return ret;
- }
+ hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);

- if (hr_qp->tc_mode == HNAE3_TC_MAP_MODE_DSCP &&
- grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP)
- hr_qp->sl = hr_qp->priority;
- else
- hr_qp->sl = rdma_ah_get_sl(&attr->ah_attr);
+ if (grh->sgid_attr->gid_type == IB_GID_TYPE_ROCE_UDP_ENCAP) {
+ ret = hns_roce_hw_v2_get_dscp(hr_dev,
+ get_tclass(&attr->ah_attr.grh),
+ &hr_qp->tc_mode, &hr_qp->priority);
+ if (ret && ret != -EOPNOTSUPP) {
+ ibdev_err_ratelimited(ibdev,
+ "failed to get dscp, ret = %d.\n",
+ ret);
+ return ret;
+ }
+
+ if (hr_qp->tc_mode == HNAE3_TC_MAP_MODE_DSCP)
+ hr_qp->sl = hr_qp->priority;
+ }

if (!check_sl_valid(hr_dev, hr_qp->sl))
return -EINVAL;
@@ -6899,7 +6924,8 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)

INIT_WORK(&hr_dev->ecc_work, fmea_ram_ecc_work);

- hr_dev->irq_workq = alloc_ordered_workqueue("hns_roce_irq_workq", 0);
+ hr_dev->irq_workq = alloc_ordered_workqueue("hns_roce_irq_workq",
+ WQ_MEM_RECLAIM);
if (!hr_dev->irq_workq) {
dev_err(dev, "failed to create irq workqueue.\n");
ret = -ENOMEM;
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index f3e58797705d7..8b2e13f1a2159 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -539,12 +539,20 @@ static int mlx5_query_port_roce(struct ib_device *device, u32 port_num,
* of an error it will still be zeroed out.
* Use native port in case of reps
*/
- if (dev->is_rep)
- err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN,
- 1, 0);
- else
- err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN,
- mdev_port_num, 0);
+ if (dev->is_rep) {
+ struct mlx5_eswitch_rep *rep;
+
+ rep = dev->port[port_num - 1].rep;
+ if (rep) {
+ mdev = mlx5_eswitch_get_core_dev(rep->esw);
+ WARN_ON(!mdev);
+ }
+ mdev_port_num = 1;
+ }
+
+ err = mlx5_query_port_ptys(mdev, out, sizeof(out), MLX5_PTYS_EN,
+ mdev_port_num, 0);
+
if (err)
goto out;
ext = !!MLX5_GET_ETH_PROTO(ptys_reg, out, true, eth_proto_capability);
@@ -2826,7 +2834,6 @@ static void mlx5_ib_handle_event(struct work_struct *_work)
container_of(_work, struct mlx5_ib_event_work, work);
struct mlx5_ib_dev *ibdev;
struct ib_event ibev;
- bool fatal = false;

if (work->is_slave) {
ibdev = mlx5_ib_get_ibdev_from_mpi(work->mpi);
@@ -2837,12 +2844,6 @@ static void mlx5_ib_handle_event(struct work_struct *_work)
}

switch (work->event) {
- case MLX5_DEV_EVENT_SYS_ERROR:
- ibev.event = IB_EVENT_DEVICE_FATAL;
- mlx5_ib_handle_internal_error(ibdev);
- ibev.element.port_num = (u8)(unsigned long)work->param;
- fatal = true;
- break;
case MLX5_EVENT_TYPE_PORT_CHANGE:
if (handle_port_change(ibdev, work->param, &ibev))
goto out;
@@ -2864,8 +2865,6 @@ static void mlx5_ib_handle_event(struct work_struct *_work)
if (ibdev->ib_active)
ib_dispatch_event(&ibev);

- if (fatal)
- ibdev->ib_active = false;
out:
kfree(work);
}
@@ -2909,6 +2908,66 @@ static int mlx5_ib_event_slave_port(struct notifier_block *nb,
return NOTIFY_OK;
}

+static void mlx5_ib_handle_sys_error_event(struct work_struct *_work)
+{
+ struct mlx5_ib_event_work *work =
+ container_of(_work, struct mlx5_ib_event_work, work);
+ struct mlx5_ib_dev *ibdev = work->dev;
+ struct ib_event ibev;
+
+ ibev.event = IB_EVENT_DEVICE_FATAL;
+ mlx5_ib_handle_internal_error(ibdev);
+ ibev.element.port_num = (u8)(unsigned long)work->param;
+ ibev.device = &ibdev->ib_dev;
+
+ if (!rdma_is_port_valid(&ibdev->ib_dev, ibev.element.port_num)) {
+ mlx5_ib_warn(ibdev, "warning: event on port %d\n", ibev.element.port_num);
+ goto out;
+ }
+
+ if (ibdev->ib_active)
+ ib_dispatch_event(&ibev);
+
+ ibdev->ib_active = false;
+out:
+ kfree(work);
+}
+
+static int mlx5_ib_sys_error_event(struct notifier_block *nb,
+ unsigned long event, void *param)
+{
+ struct mlx5_ib_event_work *work;
+
+ if (event != MLX5_DEV_EVENT_SYS_ERROR)
+ return NOTIFY_DONE;
+
+ work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ if (!work)
+ return NOTIFY_DONE;
+
+ INIT_WORK(&work->work, mlx5_ib_handle_sys_error_event);
+ work->dev = container_of(nb, struct mlx5_ib_dev, sys_error_events);
+ work->is_slave = false;
+ work->param = param;
+ work->event = event;
+
+ queue_work(mlx5_ib_event_wq, &work->work);
+
+ return NOTIFY_OK;
+}
+
+static int mlx5_ib_stage_sys_error_notifier_init(struct mlx5_ib_dev *dev)
+{
+ dev->sys_error_events.notifier_call = mlx5_ib_sys_error_event;
+ mlx5_notifier_register(dev->mdev, &dev->sys_error_events);
+ return 0;
+}
+
+static void mlx5_ib_stage_sys_error_notifier_cleanup(struct mlx5_ib_dev *dev)
+{
+ mlx5_notifier_unregister(dev->mdev, &dev->sys_error_events);
+}
+
static int mlx5_ib_get_plane_num(struct mlx5_core_dev *mdev, u8 *num_plane)
{
struct mlx5_hca_vport_context vport_ctx;
@@ -4682,6 +4741,9 @@ static const struct mlx5_ib_profile pf_profile = {
STAGE_CREATE(MLX5_IB_STAGE_WHITELIST_UID,
mlx5_ib_devx_init,
mlx5_ib_devx_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_SYS_ERROR_NOTIFIER,
+ mlx5_ib_stage_sys_error_notifier_init,
+ mlx5_ib_stage_sys_error_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
mlx5_ib_stage_ib_reg_init,
mlx5_ib_stage_ib_reg_cleanup),
@@ -4742,6 +4804,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
STAGE_CREATE(MLX5_IB_STAGE_WHITELIST_UID,
mlx5_ib_devx_init,
mlx5_ib_devx_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_SYS_ERROR_NOTIFIER,
+ mlx5_ib_stage_sys_error_notifier_init,
+ mlx5_ib_stage_sys_error_notifier_cleanup),
STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
mlx5_ib_stage_ib_reg_init,
mlx5_ib_stage_ib_reg_cleanup),
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index f49cb588a856d..3135519f1cfdf 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -979,6 +979,7 @@ enum mlx5_ib_stages {
MLX5_IB_STAGE_BFREG,
MLX5_IB_STAGE_PRE_IB_REG_UMR,
MLX5_IB_STAGE_WHITELIST_UID,
+ MLX5_IB_STAGE_SYS_ERROR_NOTIFIER,
MLX5_IB_STAGE_IB_REG,
MLX5_IB_STAGE_DEVICE_NOTIFIER,
MLX5_IB_STAGE_POST_IB_REG_UMR,
@@ -1137,6 +1138,7 @@ struct mlx5_ib_dev {
/* protect accessing data_direct_dev */
struct mutex data_direct_lock;
struct notifier_block mdev_events;
+ struct notifier_block sys_error_events;
struct notifier_block lag_events;
int num_ports;
/* serialize update of capability mask
diff --git a/drivers/infiniband/hw/mlx5/std_types.c b/drivers/infiniband/hw/mlx5/std_types.c
index bdb568411091c..d0137ab7c645c 100644
--- a/drivers/infiniband/hw/mlx5/std_types.c
+++ b/drivers/infiniband/hw/mlx5/std_types.c
@@ -214,7 +214,7 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_GET_DATA_DIRECT_SYSFS_PATH)(
int out_len = uverbs_attr_get_len(attrs,
MLX5_IB_ATTR_GET_DATA_DIRECT_SYSFS_PATH);
u32 dev_path_len;
- char *dev_path;
+ char *dev_path = NULL;
int ret;

c = to_mucontext(ib_uverbs_get_ucontext(attrs));
@@ -242,9 +242,9 @@ static int UVERBS_HANDLER(MLX5_IB_METHOD_GET_DATA_DIRECT_SYSFS_PATH)(

ret = uverbs_copy_to(attrs, MLX5_IB_ATTR_GET_DATA_DIRECT_SYSFS_PATH, dev_path,
dev_path_len);
- kfree(dev_path);

end:
+ kfree(dev_path);
mutex_unlock(&dev->data_direct_lock);
return ret;
}
diff --git a/drivers/infiniband/sw/rxe/rxe_comp.c b/drivers/infiniband/sw/rxe/rxe_comp.c
index d48af21807458..e02c5df0bef14 100644
--- a/drivers/infiniband/sw/rxe/rxe_comp.c
+++ b/drivers/infiniband/sw/rxe/rxe_comp.c
@@ -119,12 +119,15 @@ void retransmit_timer(struct timer_list *t)

rxe_dbg_qp(qp, "retransmit timer fired\n");

+ if (!rxe_get(qp))
+ return;
spin_lock_irqsave(&qp->state_lock, flags);
if (qp->valid) {
qp->comp.timeout = 1;
rxe_sched_task(&qp->send_task);
}
spin_unlock_irqrestore(&qp->state_lock, flags);
+ rxe_put(qp);
}

void rxe_comp_queue_pkt(struct rxe_qp *qp, struct sk_buff *skb)
diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
index 87a02f0deb000..d08ebb048cb0f 100644
--- a/drivers/infiniband/sw/rxe/rxe_req.c
+++ b/drivers/infiniband/sw/rxe/rxe_req.c
@@ -103,6 +103,8 @@ void rnr_nak_timer(struct timer_list *t)

rxe_dbg_qp(qp, "nak timer fired\n");

+ if (!rxe_get(qp))
+ return;
spin_lock_irqsave(&qp->state_lock, flags);
if (qp->valid) {
/* request a send queue retry */
@@ -111,6 +113,7 @@ void rnr_nak_timer(struct timer_list *t)
rxe_sched_task(&qp->send_task);
}
spin_unlock_irqrestore(&qp->state_lock, flags);
+ rxe_put(qp);
}

static void req_check_sq_drain_done(struct rxe_qp *qp)
diff --git a/drivers/infiniband/sw/rxe/rxe_srq.c b/drivers/infiniband/sw/rxe/rxe_srq.c
index 2a234f26ac104..c9a7cd38953d3 100644
--- a/drivers/infiniband/sw/rxe/rxe_srq.c
+++ b/drivers/infiniband/sw/rxe/rxe_srq.c
@@ -77,9 +77,6 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
goto err_free;
}

- srq->rq.queue = q;
- init->attr.max_wr = srq->rq.max_wr;
-
if (uresp) {
if (copy_to_user(&uresp->srq_num, &srq->srq_num,
sizeof(uresp->srq_num))) {
@@ -88,6 +85,9 @@ int rxe_srq_from_init(struct rxe_dev *rxe, struct rxe_srq *srq,
}
}

+ srq->rq.queue = q;
+ init->attr.max_wr = srq->rq.max_wr;
+
return 0;

err_free:
diff --git a/drivers/infiniband/sw/siw/siw_qp_rx.c b/drivers/infiniband/sw/siw/siw_qp_rx.c
index ed4fc39718b49..06738f582c7af 100644
--- a/drivers/infiniband/sw/siw/siw_qp_rx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_rx.c
@@ -1436,7 +1436,8 @@ int siw_tcp_rx_data(read_descriptor_t *rd_desc, struct sk_buff *skb,
}
if (unlikely(rv != 0 && rv != -EAGAIN)) {
if ((srx->state > SIW_GET_HDR ||
- qp->rx_fpdu->more_ddp_segs) && run_completion)
+ (qp->rx_fpdu && qp->rx_fpdu->more_ddp_segs)) &&
+ run_completion)
siw_rdmap_complete(qp, rv);

siw_dbg_qp(qp, "rx error %d, rx state %d\n", rv,
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-clt.c b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
index 2b397a544cb93..8fa1d72bd20a4 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-clt.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-clt.c
@@ -1923,7 +1923,7 @@ static int rtrs_rdma_conn_rejected(struct rtrs_clt_con *con,
struct rtrs_path *s = con->c.path;
const struct rtrs_msg_conn_rsp *msg;
const char *rej_msg;
- int status, errno;
+ int status, errno = -ECONNRESET;
u8 data_len;

status = ev->status;
@@ -1945,7 +1945,7 @@ static int rtrs_rdma_conn_rejected(struct rtrs_clt_con *con,
status, rej_msg);
}

- return -ECONNRESET;
+ return errno;
}

void rtrs_clt_close_conns(struct rtrs_clt_path *clt_path, bool wait)
diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
index 9ecc6343455d6..adb798e2a54ae 100644
--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
@@ -208,7 +208,6 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
size_t sg_cnt;
int err, offset;
bool need_inval;
- u32 rkey = 0;
struct ib_reg_wr rwr;
struct ib_sge *plist;
struct ib_sge list;
@@ -240,11 +239,6 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
wr->wr.num_sge = 1;
wr->remote_addr = le64_to_cpu(id->rd_msg->desc[0].addr);
wr->rkey = le32_to_cpu(id->rd_msg->desc[0].key);
- if (rkey == 0)
- rkey = wr->rkey;
- else
- /* Only one key is actually used */
- WARN_ON_ONCE(rkey != wr->rkey);

wr->wr.opcode = IB_WR_RDMA_WRITE;
wr->wr.wr_cqe = &io_comp_cqe;
@@ -277,7 +271,7 @@ static int rdma_write_sg(struct rtrs_srv_op *id)
inv_wr.opcode = IB_WR_SEND_WITH_INV;
inv_wr.wr_cqe = &io_comp_cqe;
inv_wr.send_flags = 0;
- inv_wr.ex.invalidate_rkey = rkey;
+ inv_wr.ex.invalidate_rkey = wr->rkey;
}

imm_wr.wr.next = NULL;
@@ -601,7 +595,7 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
srv_path->mrs_num++) {
struct rtrs_srv_mr *srv_mr = &srv_path->mrs[srv_path->mrs_num];
struct scatterlist *s;
- int nr, nr_sgt, chunks;
+ int nr, nr_sgt, chunks, ind;

sgt = &srv_mr->sgt;
chunks = chunks_per_mr * srv_path->mrs_num;
@@ -631,7 +625,7 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
}
nr = ib_map_mr_sg(mr, sgt->sgl, nr_sgt,
NULL, max_chunk_size);
- if (nr != nr_sgt) {
+ if (nr < nr_sgt) {
err = nr < 0 ? nr : -EINVAL;
goto dereg_mr;
}
@@ -647,9 +641,24 @@ static int map_cont_bufs(struct rtrs_srv_path *srv_path)
goto dereg_mr;
}
}
- /* Eventually dma addr for each chunk can be cached */
- for_each_sg(sgt->sgl, s, nr_sgt, i)
- srv_path->dma_addr[chunks + i] = sg_dma_address(s);
+
+ /*
+ * Cache DMA addresses by traversing sg entries. If
+ * regions were merged, an inner loop is required to
+ * populate the DMA address array by traversing larger
+ * regions.
+ */
+ ind = chunks;
+ for_each_sg(sgt->sgl, s, nr_sgt, i) {
+ unsigned int dma_len = sg_dma_len(s);
+ u64 dma_addr = sg_dma_address(s);
+ u64 dma_addr_end = dma_addr + dma_len;
+
+ do {
+ srv_path->dma_addr[ind++] = dma_addr;
+ dma_addr += max_chunk_size;
+ } while (dma_addr < dma_addr_end);
+ }

ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
srv_mr->mr = mr;
diff --git a/drivers/interconnect/mediatek/icc-emi.c b/drivers/interconnect/mediatek/icc-emi.c
index 7da740b5fa8d6..dfa3a9cd93998 100644
--- a/drivers/interconnect/mediatek/icc-emi.c
+++ b/drivers/interconnect/mediatek/icc-emi.c
@@ -12,6 +12,7 @@
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_platform.h>
+#include <linux/overflow.h>
#include <linux/platform_device.h>
#include <linux/soc/mediatek/dvfsrc.h>

@@ -22,7 +23,9 @@ static int mtk_emi_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
{
struct mtk_icc_node *in = node->data;

- *agg_avg += avg_bw;
+ if (check_add_overflow(*agg_avg, avg_bw, agg_avg))
+ *agg_avg = U32_MAX;
+
*agg_peak = max_t(u32, *agg_peak, peak_bw);

in->sum_avg = *agg_avg;
@@ -40,7 +43,7 @@ static int mtk_emi_icc_set(struct icc_node *src, struct icc_node *dst)
if (unlikely(!src->provider))
return -EINVAL;

- dev = src->provider->dev;
+ dev = src->provider->dev->parent;

switch (node->ep) {
case 0:
@@ -97,7 +100,7 @@ int mtk_emi_icc_probe(struct platform_device *pdev)
if (!data)
return -ENOMEM;

- provider->dev = pdev->dev.parent;
+ provider->dev = dev;
provider->set = mtk_emi_icc_set;
provider->aggregate = mtk_emi_icc_aggregate;
provider->xlate = of_icc_xlate_onecell;
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index 5fb7bb7a84f41..fecca5c32e8a2 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -1022,7 +1022,12 @@ static int wait_on_sem(struct amd_iommu *iommu, u64 data)
{
int i = 0;

- while (*iommu->cmd_sem != data && i < LOOP_TIMEOUT) {
+ /*
+ * cmd_sem holds a monotonically non-decreasing completion sequence
+ * number.
+ */
+ while ((__s64)(READ_ONCE(*iommu->cmd_sem) - data) < 0 &&
+ i < LOOP_TIMEOUT) {
udelay(1);
i += 1;
}
@@ -1267,14 +1272,13 @@ static int iommu_completion_wait(struct amd_iommu *iommu)
raw_spin_lock_irqsave(&iommu->lock, flags);

ret = __iommu_queue_command_sync(iommu, &cmd, false);
+ raw_spin_unlock_irqrestore(&iommu->lock, flags);
+
if (ret)
- goto out_unlock;
+ return ret;

ret = wait_on_sem(iommu, data);

-out_unlock:
- raw_spin_unlock_irqrestore(&iommu->lock, flags);
-
return ret;
}

@@ -2931,13 +2935,18 @@ static void iommu_flush_irt_and_complete(struct amd_iommu *iommu, u16 devid)
raw_spin_lock_irqsave(&iommu->lock, flags);
ret = __iommu_queue_command_sync(iommu, &cmd, true);
if (ret)
- goto out;
+ goto out_err;
ret = __iommu_queue_command_sync(iommu, &cmd2, false);
if (ret)
- goto out;
+ goto out_err;
+ raw_spin_unlock_irqrestore(&iommu->lock, flags);
+
wait_on_sem(iommu, data);
-out:
+ return;
+
+out_err:
raw_spin_unlock_irqrestore(&iommu->lock, flags);
+ return;
}

static void set_dte_irq_entry(struct amd_iommu *iommu, u16 devid,
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 560a670ee7911..5d11e5177a3c9 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -465,20 +465,26 @@ static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
*/
static void arm_smmu_cmdq_shared_lock(struct arm_smmu_cmdq *cmdq)
{
- int val;
-
/*
- * We can try to avoid the cmpxchg() loop by simply incrementing the
- * lock counter. When held in exclusive state, the lock counter is set
- * to INT_MIN so these increments won't hurt as the value will remain
- * negative.
+ * When held in exclusive state, the lock counter is set to INT_MIN
+ * so these increments won't hurt as the value will remain negative.
+ * The increment will also signal the exclusive locker that there are
+ * shared waiters.
*/
if (atomic_fetch_inc_relaxed(&cmdq->lock) >= 0)
return;

- do {
- val = atomic_cond_read_relaxed(&cmdq->lock, VAL >= 0);
- } while (atomic_cmpxchg_relaxed(&cmdq->lock, val, val + 1) != val);
+ /*
+ * Someone else is holding the lock in exclusive state, so wait
+ * for them to finish. Since we already incremented the lock counter,
+ * no exclusive lock can be acquired until we finish. We don't need
+ * the return value since we only care that the exclusive lock is
+ * released (i.e. the lock counter is non-negative).
+ * Once the exclusive locker releases the lock, the sign bit will
+ * be cleared and our increment will make the lock counter positive,
+ * allowing us to proceed.
+ */
+ atomic_cond_read_relaxed(&cmdq->lock, VAL > 0);
}

static void arm_smmu_cmdq_shared_unlock(struct arm_smmu_cmdq *cmdq)
@@ -505,9 +511,14 @@ static bool arm_smmu_cmdq_shared_tryunlock(struct arm_smmu_cmdq *cmdq)
__ret; \
})

+/*
+ * Only clear the sign bit when releasing the exclusive lock; this will
+ * allow any shared_lock() waiters to proceed without the possibility
+ * of entering the exclusive lock in a tight loop.
+ */
#define arm_smmu_cmdq_exclusive_unlock_irqrestore(cmdq, flags) \
({ \
- atomic_set_release(&cmdq->lock, 0); \
+ atomic_fetch_andnot_release(INT_MIN, &cmdq->lock); \
local_irq_restore(flags); \
})

diff --git a/drivers/iommu/intel/Makefile b/drivers/iommu/intel/Makefile
index c8beb0281559f..d3bb0798092df 100644
--- a/drivers/iommu/intel/Makefile
+++ b/drivers/iommu/intel/Makefile
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
obj-$(CONFIG_DMAR_TABLE) += dmar.o
-obj-$(CONFIG_INTEL_IOMMU) += iommu.o pasid.o nested.o cache.o
+obj-$(CONFIG_INTEL_IOMMU) += iommu.o pasid.o nested.o cache.o prq.o
obj-$(CONFIG_DMAR_TABLE) += trace.o cap_audit.o
obj-$(CONFIG_DMAR_PERF) += perf.o
obj-$(CONFIG_INTEL_IOMMU_DEBUGFS) += debugfs.o
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index c799cc67db34e..d4f852f712aa8 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1440,12 +1440,10 @@ static void free_dmar_iommu(struct intel_iommu *iommu)
/* free context mapping */
free_context_table(iommu);

-#ifdef CONFIG_INTEL_IOMMU_SVM
if (pasid_supported(iommu)) {
if (ecap_prs(iommu->ecap))
- intel_svm_finish_prq(iommu);
+ intel_iommu_finish_prq(iommu);
}
-#endif
}

/*
@@ -2386,19 +2384,18 @@ static int __init init_dmars(void)

iommu_flush_write_buffer(iommu);

-#ifdef CONFIG_INTEL_IOMMU_SVM
if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
/*
* Call dmar_alloc_hwirq() with dmar_global_lock held,
* could cause possible lock race condition.
*/
up_write(&dmar_global_lock);
- ret = intel_svm_enable_prq(iommu);
+ ret = intel_iommu_enable_prq(iommu);
down_write(&dmar_global_lock);
if (ret)
goto free_iommu;
}
-#endif
+
ret = dmar_set_interrupt(iommu);
if (ret)
goto free_iommu;
@@ -2818,13 +2815,12 @@ static int intel_iommu_add(struct dmar_drhd_unit *dmaru)
intel_iommu_init_qi(iommu);
iommu_flush_write_buffer(iommu);

-#ifdef CONFIG_INTEL_IOMMU_SVM
if (pasid_supported(iommu) && ecap_prs(iommu->ecap)) {
- ret = intel_svm_enable_prq(iommu);
+ ret = intel_iommu_enable_prq(iommu);
if (ret)
goto disable_iommu;
}
-#endif
+
ret = dmar_set_interrupt(iommu);
if (ret)
goto disable_iommu;
@@ -4337,7 +4333,6 @@ static void intel_iommu_remove_dev_pasid(struct device *dev, ioasid_t pasid,
kfree(dev_pasid);
}
intel_pasid_tear_down_entry(iommu, dev, pasid, false);
- intel_drain_pasid_prq(dev, pasid);
}

static int intel_iommu_set_dev_pasid(struct iommu_domain *domain,
@@ -4665,9 +4660,7 @@ const struct iommu_ops intel_iommu_ops = {
.def_domain_type = device_def_domain_type,
.remove_dev_pasid = intel_iommu_remove_dev_pasid,
.pgsize_bitmap = SZ_4K,
-#ifdef CONFIG_INTEL_IOMMU_SVM
- .page_response = intel_svm_page_response,
-#endif
+ .page_response = intel_iommu_page_response,
.default_domain_ops = &(const struct iommu_domain_ops) {
.attach_dev = intel_iommu_attach_device,
.set_dev_pasid = intel_iommu_set_dev_pasid,
diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
index 5b5f57d694afd..b33d8888d7ebd 100644
--- a/drivers/iommu/intel/iommu.h
+++ b/drivers/iommu/intel/iommu.h
@@ -734,12 +734,10 @@ struct intel_iommu {

struct iommu_flush flush;
#endif
-#ifdef CONFIG_INTEL_IOMMU_SVM
struct page_req_dsc *prq;
unsigned char prq_name[16]; /* Name for PRQ interrupt */
unsigned long prq_seq_number;
struct completion prq_complete;
-#endif
struct iopf_queue *iopf_queue;
unsigned char iopfq_name[16];
/* Synchronization between fault report and iommu device release. */
@@ -1283,18 +1281,18 @@ void intel_context_flush_present(struct device_domain_info *info,
struct context_entry *context,
u16 did, bool affect_domains);

+int intel_iommu_enable_prq(struct intel_iommu *iommu);
+int intel_iommu_finish_prq(struct intel_iommu *iommu);
+void intel_iommu_page_response(struct device *dev, struct iopf_fault *evt,
+ struct iommu_page_response *msg);
+void intel_iommu_drain_pasid_prq(struct device *dev, u32 pasid);
+
#ifdef CONFIG_INTEL_IOMMU_SVM
void intel_svm_check(struct intel_iommu *iommu);
-int intel_svm_enable_prq(struct intel_iommu *iommu);
-int intel_svm_finish_prq(struct intel_iommu *iommu);
-void intel_svm_page_response(struct device *dev, struct iopf_fault *evt,
- struct iommu_page_response *msg);
struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
struct mm_struct *mm);
-void intel_drain_pasid_prq(struct device *dev, u32 pasid);
#else
static inline void intel_svm_check(struct intel_iommu *iommu) {}
-static inline void intel_drain_pasid_prq(struct device *dev, u32 pasid) {}
static inline struct iommu_domain *intel_svm_domain_alloc(struct device *dev,
struct mm_struct *mm)
{
diff --git a/drivers/iommu/intel/pasid.c b/drivers/iommu/intel/pasid.c
index 2e5fa0a232999..90fdfa5f7d1d6 100644
--- a/drivers/iommu/intel/pasid.c
+++ b/drivers/iommu/intel/pasid.c
@@ -152,6 +152,9 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
if (!entries)
return NULL;

+ if (!ecap_coherent(info->iommu->ecap))
+ clflush_cache_range(entries, VTD_PAGE_SIZE);
+
/*
* The pasid directory table entry won't be freed after
* allocation. No worry about the race with free and
@@ -164,10 +167,8 @@ static struct pasid_entry *intel_pasid_get_entry(struct device *dev, u32 pasid)
iommu_free_page(entries);
goto retry;
}
- if (!ecap_coherent(info->iommu->ecap)) {
- clflush_cache_range(entries, VTD_PAGE_SIZE);
+ if (!ecap_coherent(info->iommu->ecap))
clflush_cache_range(&dir[dir_index].val, sizeof(*dir));
- }
}

return &entries[index];
@@ -217,7 +218,7 @@ devtlb_invalidation_with_pasid(struct intel_iommu *iommu,
if (!info || !info->ats_enabled)
return;

- if (pci_dev_is_disconnected(to_pci_dev(dev)))
+ if (!pci_device_is_present(to_pci_dev(dev)))
return;

sid = info->bus << 8 | info->devfn;
@@ -251,7 +252,7 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,

did = pasid_get_domain_id(pte);
pgtt = pasid_pte_get_pgtt(pte);
- intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ pasid_clear_present(pte);
spin_unlock(&iommu->lock);

if (!ecap_coherent(iommu->ecap))
@@ -265,6 +266,12 @@ void intel_pasid_tear_down_entry(struct intel_iommu *iommu, struct device *dev,
iommu->flush.flush_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH);

devtlb_invalidation_with_pasid(iommu, dev, pasid);
+ intel_pasid_clear_entry(dev, pasid, fault_ignore);
+ if (!ecap_coherent(iommu->ecap))
+ clflush_cache_range(pte, sizeof(*pte));
+
+ if (!fault_ignore)
+ intel_iommu_drain_pasid_prq(dev, pasid);
}

/*
diff --git a/drivers/iommu/intel/pasid.h b/drivers/iommu/intel/pasid.h
index dde6d3ba5ae0f..55cad7bfa294e 100644
--- a/drivers/iommu/intel/pasid.h
+++ b/drivers/iommu/intel/pasid.h
@@ -235,9 +235,23 @@ static inline void pasid_set_wpe(struct pasid_entry *pe)
*/
static inline void pasid_set_present(struct pasid_entry *pe)
{
+ dma_wmb();
pasid_set_bits(&pe->val[0], 1 << 0, 1);
}

+/*
+ * Clear the Present (P) bit (bit 0) of a scalable-mode PASID table entry.
+ * This initiates the transition of the entry's ownership from hardware
+ * to software. The caller is responsible for fulfilling the invalidation
+ * handshake recommended by the VT-d spec, Section 6.5.3.3 (Guidance to
+ * Software for Invalidations).
+ */
+static inline void pasid_clear_present(struct pasid_entry *pe)
+{
+ pasid_set_bits(&pe->val[0], 1 << 0, 0);
+ dma_wmb();
+}
+
/*
* Setup Page Walk Snoop bit (Bit 87) of a scalable mode PASID
* entry.
diff --git a/drivers/iommu/intel/prq.c b/drivers/iommu/intel/prq.c
new file mode 100644
index 0000000000000..853d1cbb635fd
--- /dev/null
+++ b/drivers/iommu/intel/prq.c
@@ -0,0 +1,402 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2015 Intel Corporation
+ *
+ * Originally split from drivers/iommu/intel/svm.c
+ */
+
+#include <linux/pci.h>
+#include <linux/pci-ats.h>
+
+#include "iommu.h"
+#include "pasid.h"
+#include "../iommu-pages.h"
+#include "trace.h"
+
+/* Page request queue descriptor */
+struct page_req_dsc {
+ union {
+ struct {
+ u64 type:8;
+ u64 pasid_present:1;
+ u64 rsvd:7;
+ u64 rid:16;
+ u64 pasid:20;
+ u64 exe_req:1;
+ u64 pm_req:1;
+ u64 rsvd2:10;
+ };
+ u64 qw_0;
+ };
+ union {
+ struct {
+ u64 rd_req:1;
+ u64 wr_req:1;
+ u64 lpig:1;
+ u64 prg_index:9;
+ u64 addr:52;
+ };
+ u64 qw_1;
+ };
+ u64 qw_2;
+ u64 qw_3;
+};
+
+/**
+ * intel_iommu_drain_pasid_prq - Drain page requests and responses for a pasid
+ * @dev: target device
+ * @pasid: pasid for draining
+ *
+ * Drain all pending page requests and responses related to @pasid in both
+ * software and hardware. This is supposed to be called after the device
+ * driver has stopped DMA, the pasid entry has been cleared, and both IOTLB
+ * and DevTLB have been invalidated.
+ *
+ * It waits until all pending page requests for @pasid in the page fault
+ * queue are completed by the prq handling thread. Then follow the steps
+ * described in VT-d spec CH7.10 to drain all page requests and page
+ * responses pending in the hardware.
+ */
+void intel_iommu_drain_pasid_prq(struct device *dev, u32 pasid)
+{
+ struct device_domain_info *info;
+ struct dmar_domain *domain;
+ struct intel_iommu *iommu;
+ struct qi_desc desc[3];
+ int head, tail;
+ u16 sid, did;
+
+ info = dev_iommu_priv_get(dev);
+ if (!info->pri_enabled)
+ return;
+
+ iommu = info->iommu;
+ domain = info->domain;
+ sid = PCI_DEVID(info->bus, info->devfn);
+ did = domain ? domain_id_iommu(domain, iommu) : FLPT_DEFAULT_DID;
+
+ /*
+ * Check and wait until all pending page requests in the queue are
+ * handled by the prq handling thread.
+ */
+prq_retry:
+ reinit_completion(&iommu->prq_complete);
+ tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+ head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+ while (head != tail) {
+ struct page_req_dsc *req;
+
+ req = &iommu->prq[head / sizeof(*req)];
+ if (!req->pasid_present || req->pasid != pasid) {
+ head = (head + sizeof(*req)) & PRQ_RING_MASK;
+ continue;
+ }
+
+ wait_for_completion(&iommu->prq_complete);
+ goto prq_retry;
+ }
+
+ iopf_queue_flush_dev(dev);
+
+ /*
+ * Perform steps described in VT-d spec CH7.10 to drain page
+ * requests and responses in hardware.
+ */
+ memset(desc, 0, sizeof(desc));
+ desc[0].qw0 = QI_IWD_STATUS_DATA(QI_DONE) |
+ QI_IWD_FENCE |
+ QI_IWD_TYPE;
+ if (pasid == IOMMU_NO_PASID) {
+ qi_desc_iotlb(iommu, did, 0, 0, DMA_TLB_DSI_FLUSH, &desc[1]);
+ qi_desc_dev_iotlb(sid, info->pfsid, info->ats_qdep, 0,
+ MAX_AGAW_PFN_WIDTH, &desc[2]);
+ } else {
+ qi_desc_piotlb(did, pasid, 0, -1, 0, &desc[1]);
+ qi_desc_dev_iotlb_pasid(sid, info->pfsid, pasid, info->ats_qdep,
+ 0, MAX_AGAW_PFN_WIDTH, &desc[2]);
+ }
+qi_retry:
+ reinit_completion(&iommu->prq_complete);
+ qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN);
+ if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
+ wait_for_completion(&iommu->prq_complete);
+ goto qi_retry;
+ }
+}
+
+static bool is_canonical_address(u64 addr)
+{
+ int shift = 64 - (__VIRTUAL_MASK_SHIFT + 1);
+ long saddr = (long)addr;
+
+ return (((saddr << shift) >> shift) == saddr);
+}
+
+static void handle_bad_prq_event(struct intel_iommu *iommu,
+ struct page_req_dsc *req, int result)
+{
+ struct qi_desc desc = { };
+
+ pr_err("%s: Invalid page request: %08llx %08llx\n",
+ iommu->name, ((unsigned long long *)req)[0],
+ ((unsigned long long *)req)[1]);
+
+ if (!req->lpig)
+ return;
+
+ desc.qw0 = QI_PGRP_PASID(req->pasid) |
+ QI_PGRP_DID(req->rid) |
+ QI_PGRP_PASID_P(req->pasid_present) |
+ QI_PGRP_RESP_CODE(result) |
+ QI_PGRP_RESP_TYPE;
+ desc.qw1 = QI_PGRP_IDX(req->prg_index) |
+ QI_PGRP_LPIG(req->lpig);
+
+ qi_submit_sync(iommu, &desc, 1, 0);
+}
+
+static int prq_to_iommu_prot(struct page_req_dsc *req)
+{
+ int prot = 0;
+
+ if (req->rd_req)
+ prot |= IOMMU_FAULT_PERM_READ;
+ if (req->wr_req)
+ prot |= IOMMU_FAULT_PERM_WRITE;
+ if (req->exe_req)
+ prot |= IOMMU_FAULT_PERM_EXEC;
+ if (req->pm_req)
+ prot |= IOMMU_FAULT_PERM_PRIV;
+
+ return prot;
+}
+
+static void intel_prq_report(struct intel_iommu *iommu, struct device *dev,
+ struct page_req_dsc *desc)
+{
+ struct iopf_fault event = { };
+
+ /* Fill in event data for device specific processing */
+ event.fault.type = IOMMU_FAULT_PAGE_REQ;
+ event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
+ event.fault.prm.pasid = desc->pasid;
+ event.fault.prm.grpid = desc->prg_index;
+ event.fault.prm.perm = prq_to_iommu_prot(desc);
+
+ if (desc->lpig)
+ event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+ if (desc->pasid_present) {
+ event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+ event.fault.prm.flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
+ }
+
+ iommu_report_device_fault(dev, &event);
+}
+
+static irqreturn_t prq_event_thread(int irq, void *d)
+{
+ struct intel_iommu *iommu = d;
+ struct page_req_dsc *req;
+ int head, tail, handled;
+ struct device *dev;
+ u64 address;
+
+ /*
+ * Clear PPR bit before reading head/tail registers, to ensure that
+ * we get a new interrupt if needed.
+ */
+ writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);
+
+ tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+ head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+ handled = (head != tail);
+ while (head != tail) {
+ req = &iommu->prq[head / sizeof(*req)];
+ address = (u64)req->addr << VTD_PAGE_SHIFT;
+
+ if (unlikely(!req->pasid_present)) {
+ pr_err("IOMMU: %s: Page request without PASID\n",
+ iommu->name);
+bad_req:
+ handle_bad_prq_event(iommu, req, QI_RESP_INVALID);
+ goto prq_advance;
+ }
+
+ if (unlikely(!is_canonical_address(address))) {
+ pr_err("IOMMU: %s: Address is not canonical\n",
+ iommu->name);
+ goto bad_req;
+ }
+
+ if (unlikely(req->pm_req && (req->rd_req | req->wr_req))) {
+ pr_err("IOMMU: %s: Page request in Privilege Mode\n",
+ iommu->name);
+ goto bad_req;
+ }
+
+ if (unlikely(req->exe_req && req->rd_req)) {
+ pr_err("IOMMU: %s: Execution request not supported\n",
+ iommu->name);
+ goto bad_req;
+ }
+
+ /* Drop Stop Marker message. No need for a response. */
+ if (unlikely(req->lpig && !req->rd_req && !req->wr_req))
+ goto prq_advance;
+
+ /*
+ * If prq is to be handled outside iommu driver via receiver of
+ * the fault notifiers, we skip the page response here.
+ */
+ mutex_lock(&iommu->iopf_lock);
+ dev = device_rbtree_find(iommu, req->rid);
+ if (!dev) {
+ mutex_unlock(&iommu->iopf_lock);
+ goto bad_req;
+ }
+
+ intel_prq_report(iommu, dev, req);
+ trace_prq_report(iommu, dev, req->qw_0, req->qw_1,
+ req->qw_2, req->qw_3,
+ iommu->prq_seq_number++);
+ mutex_unlock(&iommu->iopf_lock);
+prq_advance:
+ head = (head + sizeof(*req)) & PRQ_RING_MASK;
+ }
+
+ dmar_writeq(iommu->reg + DMAR_PQH_REG, tail);
+
+ /*
+ * Clear the page request overflow bit and wake up all threads that
+ * are waiting for the completion of this handling.
+ */
+ if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
+ iommu->name);
+ head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
+ tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
+ if (head == tail) {
+ iopf_queue_discard_partial(iommu->iopf_queue);
+ writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
+ pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
+ iommu->name);
+ }
+ }
+
+ if (!completion_done(&iommu->prq_complete))
+ complete(&iommu->prq_complete);
+
+ return IRQ_RETVAL(handled);
+}
+
+int intel_iommu_enable_prq(struct intel_iommu *iommu)
+{
+ struct iopf_queue *iopfq;
+ int irq, ret;
+
+ iommu->prq = iommu_alloc_pages_node(iommu->node, GFP_KERNEL, PRQ_ORDER);
+ if (!iommu->prq) {
+ pr_warn("IOMMU: %s: Failed to allocate page request queue\n",
+ iommu->name);
+ return -ENOMEM;
+ }
+
+ irq = dmar_alloc_hwirq(IOMMU_IRQ_ID_OFFSET_PRQ + iommu->seq_id, iommu->node, iommu);
+ if (irq <= 0) {
+ pr_err("IOMMU: %s: Failed to create IRQ vector for page request queue\n",
+ iommu->name);
+ ret = -EINVAL;
+ goto free_prq;
+ }
+ iommu->pr_irq = irq;
+
+ snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name),
+ "dmar%d-iopfq", iommu->seq_id);
+ iopfq = iopf_queue_alloc(iommu->iopfq_name);
+ if (!iopfq) {
+ pr_err("IOMMU: %s: Failed to allocate iopf queue\n", iommu->name);
+ ret = -ENOMEM;
+ goto free_hwirq;
+ }
+ iommu->iopf_queue = iopfq;
+
+ snprintf(iommu->prq_name, sizeof(iommu->prq_name), "dmar%d-prq", iommu->seq_id);
+
+ ret = request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT,
+ iommu->prq_name, iommu);
+ if (ret) {
+ pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
+ iommu->name);
+ goto free_iopfq;
+ }
+ dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
+ dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
+ dmar_writeq(iommu->reg + DMAR_PQA_REG, virt_to_phys(iommu->prq) | PRQ_ORDER);
+
+ init_completion(&iommu->prq_complete);
+
+ return 0;
+
+free_iopfq:
+ iopf_queue_free(iommu->iopf_queue);
+ iommu->iopf_queue = NULL;
+free_hwirq:
+ dmar_free_hwirq(irq);
+ iommu->pr_irq = 0;
+free_prq:
+ iommu_free_pages(iommu->prq, PRQ_ORDER);
+ iommu->prq = NULL;
+
+ return ret;
+}
+
+int intel_iommu_finish_prq(struct intel_iommu *iommu)
+{
+ dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
+ dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
+ dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);
+
+ if (iommu->pr_irq) {
+ free_irq(iommu->pr_irq, iommu);
+ dmar_free_hwirq(iommu->pr_irq);
+ iommu->pr_irq = 0;
+ }
+
+ if (iommu->iopf_queue) {
+ iopf_queue_free(iommu->iopf_queue);
+ iommu->iopf_queue = NULL;
+ }
+
+ iommu_free_pages(iommu->prq, PRQ_ORDER);
+ iommu->prq = NULL;
+
+ return 0;
+}
+
+void intel_iommu_page_response(struct device *dev, struct iopf_fault *evt,
+ struct iommu_page_response *msg)
+{
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ struct intel_iommu *iommu = info->iommu;
+ u8 bus = info->bus, devfn = info->devfn;
+ struct iommu_fault_page_request *prm;
+ struct qi_desc desc;
+ bool pasid_present;
+ bool last_page;
+ u16 sid;
+
+ prm = &evt->fault.prm;
+ sid = PCI_DEVID(bus, devfn);
+ pasid_present = prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
+ last_page = prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
+
+ desc.qw0 = QI_PGRP_PASID(prm->pasid) | QI_PGRP_DID(sid) |
+ QI_PGRP_PASID_P(pasid_present) |
+ QI_PGRP_RESP_CODE(msg->code) |
+ QI_PGRP_RESP_TYPE;
+ desc.qw1 = QI_PGRP_IDX(prm->grpid) | QI_PGRP_LPIG(last_page);
+ desc.qw2 = 0;
+ desc.qw3 = 0;
+
+ qi_submit_sync(iommu, &desc, 1, 0);
+}
diff --git a/drivers/iommu/intel/svm.c b/drivers/iommu/intel/svm.c
index 078d1e32a24ee..3cc43a958b4dc 100644
--- a/drivers/iommu/intel/svm.c
+++ b/drivers/iommu/intel/svm.c
@@ -25,92 +25,6 @@
#include "../iommu-pages.h"
#include "trace.h"

-static irqreturn_t prq_event_thread(int irq, void *d);
-
-int intel_svm_enable_prq(struct intel_iommu *iommu)
-{
- struct iopf_queue *iopfq;
- int irq, ret;
-
- iommu->prq = iommu_alloc_pages_node(iommu->node, GFP_KERNEL, PRQ_ORDER);
- if (!iommu->prq) {
- pr_warn("IOMMU: %s: Failed to allocate page request queue\n",
- iommu->name);
- return -ENOMEM;
- }
-
- irq = dmar_alloc_hwirq(IOMMU_IRQ_ID_OFFSET_PRQ + iommu->seq_id, iommu->node, iommu);
- if (irq <= 0) {
- pr_err("IOMMU: %s: Failed to create IRQ vector for page request queue\n",
- iommu->name);
- ret = -EINVAL;
- goto free_prq;
- }
- iommu->pr_irq = irq;
-
- snprintf(iommu->iopfq_name, sizeof(iommu->iopfq_name),
- "dmar%d-iopfq", iommu->seq_id);
- iopfq = iopf_queue_alloc(iommu->iopfq_name);
- if (!iopfq) {
- pr_err("IOMMU: %s: Failed to allocate iopf queue\n", iommu->name);
- ret = -ENOMEM;
- goto free_hwirq;
- }
- iommu->iopf_queue = iopfq;
-
- snprintf(iommu->prq_name, sizeof(iommu->prq_name), "dmar%d-prq", iommu->seq_id);
-
- ret = request_threaded_irq(irq, NULL, prq_event_thread, IRQF_ONESHOT,
- iommu->prq_name, iommu);
- if (ret) {
- pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
- iommu->name);
- goto free_iopfq;
- }
- dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
- dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
- dmar_writeq(iommu->reg + DMAR_PQA_REG, virt_to_phys(iommu->prq) | PRQ_ORDER);
-
- init_completion(&iommu->prq_complete);
-
- return 0;
-
-free_iopfq:
- iopf_queue_free(iommu->iopf_queue);
- iommu->iopf_queue = NULL;
-free_hwirq:
- dmar_free_hwirq(irq);
- iommu->pr_irq = 0;
-free_prq:
- iommu_free_pages(iommu->prq, PRQ_ORDER);
- iommu->prq = NULL;
-
- return ret;
-}
-
-int intel_svm_finish_prq(struct intel_iommu *iommu)
-{
- dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
- dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
- dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);
-
- if (iommu->pr_irq) {
- free_irq(iommu->pr_irq, iommu);
- dmar_free_hwirq(iommu->pr_irq);
- iommu->pr_irq = 0;
- }
-
- if (iommu->iopf_queue) {
- iopf_queue_free(iommu->iopf_queue);
- iommu->iopf_queue = NULL;
- }
-
- iommu_free_pages(iommu->prq, PRQ_ORDER);
- iommu->prq = NULL;
-
- return 0;
-}
-
void intel_svm_check(struct intel_iommu *iommu)
{
if (!pasid_supported(iommu))
@@ -240,317 +154,6 @@ static int intel_svm_set_dev_pasid(struct iommu_domain *domain,
return ret;
}

-/* Page request queue descriptor */
-struct page_req_dsc {
- union {
- struct {
- u64 type:8;
- u64 pasid_present:1;
- u64 rsvd:7;
- u64 rid:16;
- u64 pasid:20;
- u64 exe_req:1;
- u64 pm_req:1;
- u64 rsvd2:10;
- };
- u64 qw_0;
- };
- union {
- struct {
- u64 rd_req:1;
- u64 wr_req:1;
- u64 lpig:1;
- u64 prg_index:9;
- u64 addr:52;
- };
- u64 qw_1;
- };
- u64 qw_2;
- u64 qw_3;
-};
-
-static bool is_canonical_address(u64 addr)
-{
- int shift = 64 - (__VIRTUAL_MASK_SHIFT + 1);
- long saddr = (long) addr;
-
- return (((saddr << shift) >> shift) == saddr);
-}
-
-/**
- * intel_drain_pasid_prq - Drain page requests and responses for a pasid
- * @dev: target device
- * @pasid: pasid for draining
- *
- * Drain all pending page requests and responses related to @pasid in both
- * software and hardware. This is supposed to be called after the device
- * driver has stopped DMA, the pasid entry has been cleared, and both IOTLB
- * and DevTLB have been invalidated.
- *
- * It waits until all pending page requests for @pasid in the page fault
- * queue are completed by the prq handling thread. Then follow the steps
- * described in VT-d spec CH7.10 to drain all page requests and page
- * responses pending in the hardware.
- */
-void intel_drain_pasid_prq(struct device *dev, u32 pasid)
-{
- struct device_domain_info *info;
- struct dmar_domain *domain;
- struct intel_iommu *iommu;
- struct qi_desc desc[3];
- struct pci_dev *pdev;
- int head, tail;
- u16 sid, did;
- int qdep;
-
- info = dev_iommu_priv_get(dev);
- if (WARN_ON(!info || !dev_is_pci(dev)))
- return;
-
- if (!info->pri_enabled)
- return;
-
- iommu = info->iommu;
- domain = info->domain;
- pdev = to_pci_dev(dev);
- sid = PCI_DEVID(info->bus, info->devfn);
- did = domain ? domain_id_iommu(domain, iommu) : FLPT_DEFAULT_DID;
- qdep = pci_ats_queue_depth(pdev);
-
- /*
- * Check and wait until all pending page requests in the queue are
- * handled by the prq handling thread.
- */
-prq_retry:
- reinit_completion(&iommu->prq_complete);
- tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
- head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
- while (head != tail) {
- struct page_req_dsc *req;
-
- req = &iommu->prq[head / sizeof(*req)];
- if (!req->pasid_present || req->pasid != pasid) {
- head = (head + sizeof(*req)) & PRQ_RING_MASK;
- continue;
- }
-
- wait_for_completion(&iommu->prq_complete);
- goto prq_retry;
- }
-
- iopf_queue_flush_dev(dev);
-
- /*
- * Perform steps described in VT-d spec CH7.10 to drain page
- * requests and responses in hardware.
- */
- memset(desc, 0, sizeof(desc));
- desc[0].qw0 = QI_IWD_STATUS_DATA(QI_DONE) |
- QI_IWD_FENCE |
- QI_IWD_TYPE;
- desc[1].qw0 = QI_EIOTLB_PASID(pasid) |
- QI_EIOTLB_DID(did) |
- QI_EIOTLB_GRAN(QI_GRAN_NONG_PASID) |
- QI_EIOTLB_TYPE;
- desc[2].qw0 = QI_DEV_EIOTLB_PASID(pasid) |
- QI_DEV_EIOTLB_SID(sid) |
- QI_DEV_EIOTLB_QDEP(qdep) |
- QI_DEIOTLB_TYPE |
- QI_DEV_IOTLB_PFSID(info->pfsid);
-qi_retry:
- reinit_completion(&iommu->prq_complete);
- qi_submit_sync(iommu, desc, 3, QI_OPT_WAIT_DRAIN);
- if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
- wait_for_completion(&iommu->prq_complete);
- goto qi_retry;
- }
-}
-
-static int prq_to_iommu_prot(struct page_req_dsc *req)
-{
- int prot = 0;
-
- if (req->rd_req)
- prot |= IOMMU_FAULT_PERM_READ;
- if (req->wr_req)
- prot |= IOMMU_FAULT_PERM_WRITE;
- if (req->exe_req)
- prot |= IOMMU_FAULT_PERM_EXEC;
- if (req->pm_req)
- prot |= IOMMU_FAULT_PERM_PRIV;
-
- return prot;
-}
-
-static void intel_svm_prq_report(struct intel_iommu *iommu, struct device *dev,
- struct page_req_dsc *desc)
-{
- struct iopf_fault event = { };
-
- /* Fill in event data for device specific processing */
- event.fault.type = IOMMU_FAULT_PAGE_REQ;
- event.fault.prm.addr = (u64)desc->addr << VTD_PAGE_SHIFT;
- event.fault.prm.pasid = desc->pasid;
- event.fault.prm.grpid = desc->prg_index;
- event.fault.prm.perm = prq_to_iommu_prot(desc);
-
- if (desc->lpig)
- event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
- if (desc->pasid_present) {
- event.fault.prm.flags |= IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
- event.fault.prm.flags |= IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
- }
-
- iommu_report_device_fault(dev, &event);
-}
-
-static void handle_bad_prq_event(struct intel_iommu *iommu,
- struct page_req_dsc *req, int result)
-{
- struct qi_desc desc = { };
-
- pr_err("%s: Invalid page request: %08llx %08llx\n",
- iommu->name, ((unsigned long long *)req)[0],
- ((unsigned long long *)req)[1]);
-
- if (!req->lpig)
- return;
-
- desc.qw0 = QI_PGRP_PASID(req->pasid) |
- QI_PGRP_DID(req->rid) |
- QI_PGRP_PASID_P(req->pasid_present) |
- QI_PGRP_RESP_CODE(result) |
- QI_PGRP_RESP_TYPE;
- desc.qw1 = QI_PGRP_IDX(req->prg_index) |
- QI_PGRP_LPIG(req->lpig);
-
- qi_submit_sync(iommu, &desc, 1, 0);
-}
-
-static irqreturn_t prq_event_thread(int irq, void *d)
-{
- struct intel_iommu *iommu = d;
- struct page_req_dsc *req;
- int head, tail, handled;
- struct device *dev;
- u64 address;
-
- /*
- * Clear PPR bit before reading head/tail registers, to ensure that
- * we get a new interrupt if needed.
- */
- writel(DMA_PRS_PPR, iommu->reg + DMAR_PRS_REG);
-
- tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
- head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
- handled = (head != tail);
- while (head != tail) {
- req = &iommu->prq[head / sizeof(*req)];
- address = (u64)req->addr << VTD_PAGE_SHIFT;
-
- if (unlikely(!req->pasid_present)) {
- pr_err("IOMMU: %s: Page request without PASID\n",
- iommu->name);
-bad_req:
- handle_bad_prq_event(iommu, req, QI_RESP_INVALID);
- goto prq_advance;
- }
-
- if (unlikely(!is_canonical_address(address))) {
- pr_err("IOMMU: %s: Address is not canonical\n",
- iommu->name);
- goto bad_req;
- }
-
- if (unlikely(req->pm_req && (req->rd_req | req->wr_req))) {
- pr_err("IOMMU: %s: Page request in Privilege Mode\n",
- iommu->name);
- goto bad_req;
- }
-
- if (unlikely(req->exe_req && req->rd_req)) {
- pr_err("IOMMU: %s: Execution request not supported\n",
- iommu->name);
- goto bad_req;
- }
-
- /* Drop Stop Marker message. No need for a response. */
- if (unlikely(req->lpig && !req->rd_req && !req->wr_req))
- goto prq_advance;
-
- /*
- * If prq is to be handled outside iommu driver via receiver of
- * the fault notifiers, we skip the page response here.
- */
- mutex_lock(&iommu->iopf_lock);
- dev = device_rbtree_find(iommu, req->rid);
- if (!dev) {
- mutex_unlock(&iommu->iopf_lock);
- goto bad_req;
- }
-
- intel_svm_prq_report(iommu, dev, req);
- trace_prq_report(iommu, dev, req->qw_0, req->qw_1,
- req->qw_2, req->qw_3,
- iommu->prq_seq_number++);
- mutex_unlock(&iommu->iopf_lock);
-prq_advance:
- head = (head + sizeof(*req)) & PRQ_RING_MASK;
- }
-
- dmar_writeq(iommu->reg + DMAR_PQH_REG, tail);
-
- /*
- * Clear the page request overflow bit and wake up all threads that
- * are waiting for the completion of this handling.
- */
- if (readl(iommu->reg + DMAR_PRS_REG) & DMA_PRS_PRO) {
- pr_info_ratelimited("IOMMU: %s: PRQ overflow detected\n",
- iommu->name);
- head = dmar_readq(iommu->reg + DMAR_PQH_REG) & PRQ_RING_MASK;
- tail = dmar_readq(iommu->reg + DMAR_PQT_REG) & PRQ_RING_MASK;
- if (head == tail) {
- iopf_queue_discard_partial(iommu->iopf_queue);
- writel(DMA_PRS_PRO, iommu->reg + DMAR_PRS_REG);
- pr_info_ratelimited("IOMMU: %s: PRQ overflow cleared",
- iommu->name);
- }
- }
-
- if (!completion_done(&iommu->prq_complete))
- complete(&iommu->prq_complete);
-
- return IRQ_RETVAL(handled);
-}
-
-void intel_svm_page_response(struct device *dev, struct iopf_fault *evt,
- struct iommu_page_response *msg)
-{
- struct device_domain_info *info = dev_iommu_priv_get(dev);
- struct intel_iommu *iommu = info->iommu;
- u8 bus = info->bus, devfn = info->devfn;
- struct iommu_fault_page_request *prm;
- struct qi_desc desc;
- bool pasid_present;
- bool last_page;
- u16 sid;
-
- prm = &evt->fault.prm;
- sid = PCI_DEVID(bus, devfn);
- pasid_present = prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID;
- last_page = prm->flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE;
-
- desc.qw0 = QI_PGRP_PASID(prm->pasid) | QI_PGRP_DID(sid) |
- QI_PGRP_PASID_P(pasid_present) |
- QI_PGRP_RESP_CODE(msg->code) |
- QI_PGRP_RESP_TYPE;
- desc.qw1 = QI_PGRP_IDX(prm->grpid) | QI_PGRP_LPIG(last_page);
- desc.qw2 = 0;
- desc.qw3 = 0;
-
- qi_submit_sync(iommu, &desc, 1, 0);
-}
-
static void intel_svm_domain_free(struct iommu_domain *domain)
{
struct dmar_domain *dmar_domain = to_dmar_domain(domain);
diff --git a/drivers/leds/rgb/leds-qcom-lpg.c b/drivers/leds/rgb/leds-qcom-lpg.c
index 84e02867f3b43..98c60e971b48f 100644
--- a/drivers/leds/rgb/leds-qcom-lpg.c
+++ b/drivers/leds/rgb/leds-qcom-lpg.c
@@ -368,7 +368,7 @@ static int lpg_lut_store(struct lpg *lpg, struct led_pattern *pattern,
{
unsigned int idx;
u16 val;
- int i;
+ int i, ret;

idx = bitmap_find_next_zero_area(lpg->lut_bitmap, lpg->lut_size,
0, len, 0);
@@ -378,8 +378,10 @@ static int lpg_lut_store(struct lpg *lpg, struct led_pattern *pattern,
for (i = 0; i < len; i++) {
val = pattern[i].brightness;

- regmap_bulk_write(lpg->map, lpg->lut_base + LPG_LUT_REG(idx + i),
- &val, sizeof(val));
+ ret = regmap_bulk_write(lpg->map, lpg->lut_base + LPG_LUT_REG(idx + i),
+ &val, sizeof(val));
+ if (ret)
+ return ret;
}

bitmap_set(lpg->lut_bitmap, idx, len);
diff --git a/drivers/mailbox/bcm-flexrm-mailbox.c b/drivers/mailbox/bcm-flexrm-mailbox.c
index b1abc2a0c971a..b616a8eacc0ee 100644
--- a/drivers/mailbox/bcm-flexrm-mailbox.c
+++ b/drivers/mailbox/bcm-flexrm-mailbox.c
@@ -1173,14 +1173,6 @@ static int flexrm_debugfs_stats_show(struct seq_file *file, void *offset)

/* ====== FlexRM interrupt handler ===== */

-static irqreturn_t flexrm_irq_event(int irq, void *dev_id)
-{
- /* We only have MSI for completions so just wakeup IRQ thread */
- /* Ring related errors will be informed via completion descriptors */
-
- return IRQ_WAKE_THREAD;
-}
-
static irqreturn_t flexrm_irq_thread(int irq, void *dev_id)
{
flexrm_process_completions(dev_id);
@@ -1271,10 +1263,8 @@ static int flexrm_startup(struct mbox_chan *chan)
ret = -ENODEV;
goto fail_free_cmpl_memory;
}
- ret = request_threaded_irq(ring->irq,
- flexrm_irq_event,
- flexrm_irq_thread,
- 0, dev_name(ring->mbox->dev), ring);
+ ret = request_threaded_irq(ring->irq, NULL, flexrm_irq_thread,
+ IRQF_ONESHOT, dev_name(ring->mbox->dev), ring);
if (ret) {
dev_err(ring->mbox->dev,
"failed to request ring%d IRQ\n", ring->num);
diff --git a/drivers/mailbox/imx-mailbox.c b/drivers/mailbox/imx-mailbox.c
index 0657bd3d8f97b..aa9a299859d6a 100644
--- a/drivers/mailbox/imx-mailbox.c
+++ b/drivers/mailbox/imx-mailbox.c
@@ -122,6 +122,7 @@ struct imx_mu_dcfg {
u32 xRR; /* Receive Register0 */
u32 xSR[IMX_MU_xSR_MAX]; /* Status Registers */
u32 xCR[IMX_MU_xCR_MAX]; /* Control Registers */
+ bool skip_suspend_flag;
};

#define IMX_MU_xSR_GIPn(type, x) (type & IMX_MU_V2 ? BIT(x) : BIT(28 + (3 - (x))))
@@ -988,6 +989,7 @@ static const struct imx_mu_dcfg imx_mu_cfg_imx7ulp = {
.xRR = 0x40,
.xSR = {0x60, 0x60, 0x60, 0x60},
.xCR = {0x64, 0x64, 0x64, 0x64, 0x64},
+ .skip_suspend_flag = true,
};

static const struct imx_mu_dcfg imx_mu_cfg_imx8ulp = {
@@ -1071,7 +1073,8 @@ static int __maybe_unused imx_mu_suspend_noirq(struct device *dev)
priv->xcr[i] = imx_mu_read(priv, priv->dcfg->xCR[i]);
}

- priv->suspend = true;
+ if (!priv->dcfg->skip_suspend_flag)
+ priv->suspend = true;

return 0;
}
@@ -1094,7 +1097,8 @@ static int __maybe_unused imx_mu_resume_noirq(struct device *dev)
imx_mu_write(priv, priv->xcr[i], priv->dcfg->xCR[i]);
}

- priv->suspend = false;
+ if (!priv->dcfg->skip_suspend_flag)
+ priv->suspend = false;

return 0;
}
diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
index 2b7d0bc920726..1d4191cb91380 100644
--- a/drivers/mailbox/pcc.c
+++ b/drivers/mailbox/pcc.c
@@ -481,7 +481,7 @@ static int pcc_startup(struct mbox_chan *chan)

if (pchan->plat_irq > 0) {
irqflags = pcc_chan_plat_irq_can_be_shared(pchan) ?
- IRQF_SHARED | IRQF_ONESHOT : 0;
+ IRQF_SHARED : 0;
rc = devm_request_irq(chan->mbox->dev, pchan->plat_irq, pcc_mbox_irq,
irqflags, MBOX_IRQ_NAME, chan);
if (unlikely(rc)) {
diff --git a/drivers/mailbox/sprd-mailbox.c b/drivers/mailbox/sprd-mailbox.c
index ee8539dfcef54..46d0c34177ab9 100644
--- a/drivers/mailbox/sprd-mailbox.c
+++ b/drivers/mailbox/sprd-mailbox.c
@@ -166,6 +166,11 @@ static irqreturn_t sprd_mbox_inbox_isr(int irq, void *data)
return IRQ_NONE;
}

+ /* Clear FIFO delivery and overflow status first */
+ writel(fifo_sts &
+ (SPRD_INBOX_FIFO_DELIVER_MASK | SPRD_INBOX_FIFO_OVERLOW_MASK),
+ priv->inbox_base + SPRD_MBOX_FIFO_RST);
+
while (send_sts) {
id = __ffs(send_sts);
send_sts &= (send_sts - 1);
@@ -181,11 +186,6 @@ static irqreturn_t sprd_mbox_inbox_isr(int irq, void *data)
mbox_chan_txdone(chan, 0);
}

- /* Clear FIFO delivery and overflow status */
- writel(fifo_sts &
- (SPRD_INBOX_FIFO_DELIVER_MASK | SPRD_INBOX_FIFO_OVERLOW_MASK),
- priv->inbox_base + SPRD_MBOX_FIFO_RST);
-
/* Clear irq status */
writel(SPRD_MBOX_IRQ_CLR, priv->inbox_base + SPRD_MBOX_IRQ_STS);

@@ -243,21 +243,19 @@ static int sprd_mbox_startup(struct mbox_chan *chan)
/* Select outbox FIFO mode and reset the outbox FIFO status */
writel(0x0, priv->outbox_base + SPRD_MBOX_FIFO_RST);

- /* Enable inbox FIFO overflow and delivery interrupt */
- val = readl(priv->inbox_base + SPRD_MBOX_IRQ_MSK);
- val &= ~(SPRD_INBOX_FIFO_OVERFLOW_IRQ | SPRD_INBOX_FIFO_DELIVER_IRQ);
+ /* Enable inbox FIFO delivery interrupt */
+ val = SPRD_INBOX_FIFO_IRQ_MASK;
+ val &= ~SPRD_INBOX_FIFO_DELIVER_IRQ;
writel(val, priv->inbox_base + SPRD_MBOX_IRQ_MSK);

/* Enable outbox FIFO not empty interrupt */
- val = readl(priv->outbox_base + SPRD_MBOX_IRQ_MSK);
+ val = SPRD_OUTBOX_FIFO_IRQ_MASK;
val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
writel(val, priv->outbox_base + SPRD_MBOX_IRQ_MSK);

/* Enable supplementary outbox as the fundamental one */
if (priv->supp_base) {
writel(0x0, priv->supp_base + SPRD_MBOX_FIFO_RST);
- val = readl(priv->supp_base + SPRD_MBOX_IRQ_MSK);
- val &= ~SPRD_OUTBOX_FIFO_NOT_EMPTY_IRQ;
writel(val, priv->supp_base + SPRD_MBOX_IRQ_MSK);
}
}
diff --git a/drivers/md/dm-exception-store.c b/drivers/md/dm-exception-store.c
index c3799757bf4a0..88f119a0a2ae0 100644
--- a/drivers/md/dm-exception-store.c
+++ b/drivers/md/dm-exception-store.c
@@ -116,7 +116,7 @@ int dm_exception_store_type_register(struct dm_exception_store_type *type)
if (!__find_exception_store_type(type->name))
list_add(&type->list, &_exception_store_types);
else
- r = -EEXIST;
+ r = -EBUSY;
spin_unlock(&_lock);

return r;
diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
index 444cf35feebf4..e1dc373ad209d 100644
--- a/drivers/md/dm-integrity.c
+++ b/drivers/md/dm-integrity.c
@@ -2312,7 +2312,7 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map

new_pos = find_journal_node(ic, dio->range.logical_sector, &next_sector);
if (unlikely(new_pos != NOT_FOUND) ||
- unlikely(next_sector < dio->range.logical_sector - dio->range.n_sectors)) {
+ unlikely(next_sector < dio->range.logical_sector + dio->range.n_sectors)) {
remove_range_unlocked(ic, &dio->range);
spin_unlock_irq(&ic->endio_wait.lock);
queue_work(ic->commit_wq, &ic->commit_work);
@@ -3653,14 +3653,27 @@ static void dm_integrity_resume(struct dm_target *ti)
struct dm_integrity_c *ic = ti->private;
__u64 old_provided_data_sectors = le64_to_cpu(ic->sb->provided_data_sectors);
int r;
+ __le32 flags;

DEBUG_print("resume\n");

ic->wrote_to_journal = false;

+ flags = ic->sb->flags & cpu_to_le32(SB_FLAG_RECALCULATING);
+ r = sync_rw_sb(ic, REQ_OP_READ);
+ if (r)
+ dm_integrity_io_error(ic, "reading superblock", r);
+ if ((ic->sb->flags & flags) != flags) {
+ ic->sb->flags |= flags;
+ r = sync_rw_sb(ic, REQ_OP_WRITE | REQ_FUA);
+ if (unlikely(r))
+ dm_integrity_io_error(ic, "writing superblock", r);
+ }
+
if (ic->provided_data_sectors != old_provided_data_sectors) {
if (ic->provided_data_sectors > old_provided_data_sectors &&
ic->mode == 'B' &&
+ ic->sb->flags & cpu_to_le32(SB_FLAG_DIRTY_BITMAP) &&
ic->sb->log2_blocks_per_bitmap_bit == ic->log2_blocks_per_bitmap_bit) {
rw_journal_sectors(ic, REQ_OP_READ, 0,
ic->n_bitmap_blocks * (BITMAP_BLOCK_SIZE >> SECTOR_SHIFT), NULL);
diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
index 9d85d045f9d9d..bced5a783ee33 100644
--- a/drivers/md/dm-log.c
+++ b/drivers/md/dm-log.c
@@ -121,7 +121,7 @@ int dm_dirty_log_type_register(struct dm_dirty_log_type *type)
if (!__find_dirty_log_type(type->name))
list_add(&type->list, &_log_types);
else
- r = -EEXIST;
+ r = -EBUSY;
spin_unlock(&_lock);

return r;
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 637977acc3dc0..6e68a22f0d283 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -220,6 +220,7 @@ static struct multipath *alloc_multipath(struct dm_target *ti)
mutex_init(&m->work_mutex);

m->queue_mode = DM_TYPE_NONE;
+ m->pg_init_delay_msecs = DM_PG_INIT_DELAY_DEFAULT;

m->ti = ti;
ti->private = m;
@@ -252,7 +253,6 @@ static int alloc_multipath_stage2(struct dm_target *ti, struct multipath *m)
set_bit(MPATHF_QUEUE_IO, &m->flags);
atomic_set(&m->pg_init_in_progress, 0);
atomic_set(&m->pg_init_count, 0);
- m->pg_init_delay_msecs = DM_PG_INIT_DELAY_DEFAULT;
init_waitqueue_head(&m->pg_init_wait);

return 0;
diff --git a/drivers/md/dm-path-selector.c b/drivers/md/dm-path-selector.c
index 3e4cb81ce512c..78f98545ca72d 100644
--- a/drivers/md/dm-path-selector.c
+++ b/drivers/md/dm-path-selector.c
@@ -107,7 +107,7 @@ int dm_register_path_selector(struct path_selector_type *pst)

if (__find_path_selector_type(pst->name)) {
kfree(psi);
- r = -EEXIST;
+ r = -EBUSY;
} else
list_add(&psi->list, &_path_selectors);

diff --git a/drivers/md/dm-rq.c b/drivers/md/dm-rq.c
index 499f8cc8a39fb..29d6ee68d4dfb 100644
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -109,14 +109,21 @@ static void end_clone_bio(struct bio *clone)
*/
tio->completed += nr_bytes;

+ if (!is_last)
+ return;
+ /*
+ * At this moment we know this is the last bio of the cloned request,
+ * and all cloned bios have been released, so reset the clone request's
+ * bio pointer to avoid double free.
+ */
+ tio->clone->bio = NULL;
+ exit:
/*
* Update the original request.
* Do not use blk_mq_end_request() here, because it may complete
* the original request before the clone, and break the ordering.
*/
- if (is_last)
- exit:
- blk_update_request(tio->orig, BLK_STS_OK, tio->completed);
+ blk_update_request(tio->orig, BLK_STS_OK, tio->completed);
}

static struct dm_rq_target_io *tio_from_request(struct request *rq)
@@ -278,8 +285,7 @@ static void dm_complete_request(struct request *rq, blk_status_t error)
struct dm_rq_target_io *tio = tio_from_request(rq);

tio->error = error;
- if (likely(!blk_should_fake_timeout(rq->q)))
- blk_mq_complete_request(rq);
+ blk_mq_complete_request(rq);
}

/*
diff --git a/drivers/md/dm-target.c b/drivers/md/dm-target.c
index b676fd01a0227..33f81611bdefd 100644
--- a/drivers/md/dm-target.c
+++ b/drivers/md/dm-target.c
@@ -88,7 +88,7 @@ int dm_register_target(struct target_type *tt)
if (__find_target_type(tt->name)) {
DMERR("%s: '%s' target already registered",
__func__, tt->name);
- rv = -EEXIST;
+ rv = -EBUSY;
} else {
list_add(&tt->list, &_targets);
}
diff --git a/drivers/md/dm-unstripe.c b/drivers/md/dm-unstripe.c
index e8a9432057dce..17be483595642 100644
--- a/drivers/md/dm-unstripe.c
+++ b/drivers/md/dm-unstripe.c
@@ -117,7 +117,7 @@ static void unstripe_dtr(struct dm_target *ti)
static sector_t map_to_core(struct dm_target *ti, struct bio *bio)
{
struct unstripe_c *uc = ti->private;
- sector_t sector = bio->bi_iter.bi_sector;
+ sector_t sector = dm_target_offset(ti, bio->bi_iter.bi_sector);
sector_t tmp_sector = sector;

/* Shift us up to the right "row" on the stripe */
diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
index 7d477ff6f26bb..6f98065d8e5cc 100644
--- a/drivers/md/dm-verity-fec.c
+++ b/drivers/md/dm-verity-fec.c
@@ -546,9 +546,9 @@ void verity_fec_dtr(struct dm_verity *v)
mempool_exit(&f->output_pool);
kmem_cache_destroy(f->cache);

- if (f->data_bufio)
+ if (!IS_ERR_OR_NULL(f->data_bufio))
dm_bufio_client_destroy(f->data_bufio);
- if (f->bufio)
+ if (!IS_ERR_OR_NULL(f->bufio))
dm_bufio_client_destroy(f->bufio);

if (f->dev)
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 04cc36a9d5ca4..912b9fe1f5648 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -56,7 +56,7 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
{
struct mapped_device *md = disk->private_data;
struct dm_table *map;
- struct dm_table *zone_revalidate_map = md->zone_revalidate_map;
+ struct dm_table *zone_revalidate_map = READ_ONCE(md->zone_revalidate_map);
int srcu_idx, ret = -EIO;
bool put_table = false;

@@ -66,11 +66,13 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
* Zone revalidation during __bind() is in progress, but this
* call is from a different process
*/
- if (dm_suspended_md(md))
- return -EAGAIN;
-
map = dm_get_live_table(md, &srcu_idx);
put_table = true;
+
+ if (dm_suspended_md(md)) {
+ ret = -EAGAIN;
+ goto do_put_table;
+ }
} else {
/* Zone revalidation during __bind() */
map = zone_revalidate_map;
@@ -80,6 +82,7 @@ int dm_blk_report_zones(struct gendisk *disk, sector_t sector,
ret = dm_blk_do_report_zones(md, map, sector, nr_zones, cb,
data);

+do_put_table:
if (put_table)
dm_put_live_table(md, srcu_idx);

diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index fd84a126f63fb..ec48fcdb19ed8 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1386,6 +1386,8 @@ void dm_submit_bio_remap(struct bio *clone, struct bio *tgt_clone)
if (!tgt_clone)
tgt_clone = clone;

+ bio_clone_blkg_association(tgt_clone, io->orig_bio);
+
/*
* Account io->origin_bio to DM dev on behalf of target
* that took ownership of IO with DM_MAPIO_SUBMITTED.
diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
index e7e1833ff04b2..21cdf48815f88 100644
--- a/drivers/md/md-bitmap.c
+++ b/drivers/md/md-bitmap.c
@@ -2465,6 +2465,7 @@ static int __bitmap_resize(struct bitmap *bitmap, sector_t blocks,
memcpy(page_address(store.sb_page),
page_address(bitmap->storage.sb_page),
sizeof(bitmap_super_t));
+ mutex_lock(&bitmap->mddev->bitmap_info.mutex);
spin_lock_irq(&bitmap->counts.lock);
md_bitmap_file_unmap(&bitmap->storage);
bitmap->storage = store;
@@ -2572,7 +2573,7 @@ static int __bitmap_resize(struct bitmap *bitmap, sector_t blocks,
set_page_attr(bitmap, i, BITMAP_PAGE_DIRTY);
}
spin_unlock_irq(&bitmap->counts.lock);
-
+ mutex_unlock(&bitmap->mddev->bitmap_info.mutex);
if (!init) {
__bitmap_unplug(bitmap);
bitmap->mddev->pers->quiesce(bitmap->mddev, 0);
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 6595f89becdb7..bf9ab478df2c0 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -549,8 +549,13 @@ static void process_metadata_update(struct mddev *mddev, struct cluster_msg *msg

dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_CR);

- /* daemaon thread must exist */
thread = rcu_dereference_protected(mddev->thread, true);
+ if (!thread) {
+ pr_warn("md-cluster: Received metadata update but MD thread is not ready\n");
+ dlm_unlock_sync(cinfo->no_new_dev_lockres);
+ return;
+ }
+
wait_event(thread->wqueue,
(got_lock = mddev_trylock(mddev)) ||
test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state));
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 4c6b1bd6da9bb..093b04e6be675 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -45,6 +45,7 @@

static void allow_barrier(struct r1conf *conf, sector_t sector_nr);
static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
+static void raid1_free(struct mddev *mddev, void *priv);

#define RAID_1_10_NAME "raid1"
#include "raid1-10.c"
@@ -3245,8 +3246,12 @@ static int raid1_run(struct mddev *mddev)

if (!mddev_is_dm(mddev)) {
ret = raid1_set_limits(mddev);
- if (ret)
+ if (ret) {
+ md_unregister_thread(mddev, &conf->thread);
+ if (!mddev->private)
+ raid1_free(mddev, conf);
return ret;
+ }
}

mddev->degraded = 0;
@@ -3260,6 +3265,8 @@ static int raid1_run(struct mddev *mddev)
*/
if (conf->raid_disks - mddev->degraded < 1) {
md_unregister_thread(mddev, &conf->thread);
+ if (!mddev->private)
+ raid1_free(mddev, conf);
return -EINVAL;
}

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a91911a9fc036..db07c99c4d947 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3417,7 +3417,6 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
!test_bit(In_sync, &rdev->flags))
continue;
/* This is where we read from */
- any_working = 1;
sector = r10_bio->devs[j].addr;

if (is_badblock(rdev, sector, max_sync,
@@ -3432,6 +3431,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
continue;
}
}
+ any_working = 1;
bio = r10_bio->devs[0].bio;
bio->bi_next = biolist;
biolist = bio;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 7262b77a8e022..5079943046743 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -8049,7 +8049,8 @@ static int raid5_run(struct mddev *mddev)
goto abort;
}

- if (log_init(conf, journal_dev, raid5_has_ppl(conf)))
+ ret = log_init(conf, journal_dev, raid5_has_ppl(conf));
+ if (ret)
goto abort;

return 0;
diff --git a/drivers/media/dvb-core/dmxdev.c b/drivers/media/dvb-core/dmxdev.c
index 9ce5f010de3f8..804fb339f7355 100644
--- a/drivers/media/dvb-core/dmxdev.c
+++ b/drivers/media/dvb-core/dmxdev.c
@@ -396,11 +396,11 @@ static int dvb_dmxdev_section_callback(const u8 *buffer1, size_t buffer1_len,
if (dvb_vb2_is_streaming(&dmxdevfilter->vb2_ctx)) {
ret = dvb_vb2_fill_buffer(&dmxdevfilter->vb2_ctx,
buffer1, buffer1_len,
- buffer_flags);
+ buffer_flags, true);
if (ret == buffer1_len)
ret = dvb_vb2_fill_buffer(&dmxdevfilter->vb2_ctx,
buffer2, buffer2_len,
- buffer_flags);
+ buffer_flags, true);
} else {
ret = dvb_dmxdev_buffer_write(&dmxdevfilter->buffer,
buffer1, buffer1_len);
@@ -451,10 +451,10 @@ static int dvb_dmxdev_ts_callback(const u8 *buffer1, size_t buffer1_len,

if (dvb_vb2_is_streaming(ctx)) {
ret = dvb_vb2_fill_buffer(ctx, buffer1, buffer1_len,
- buffer_flags);
+ buffer_flags, false);
if (ret == buffer1_len)
ret = dvb_vb2_fill_buffer(ctx, buffer2, buffer2_len,
- buffer_flags);
+ buffer_flags, false);
} else {
if (buffer->error) {
spin_unlock(&dmxdevfilter->dev->lock);
diff --git a/drivers/media/dvb-core/dvb_vb2.c b/drivers/media/dvb-core/dvb_vb2.c
index 29edaaff7a5c9..7444bbc2f24d9 100644
--- a/drivers/media/dvb-core/dvb_vb2.c
+++ b/drivers/media/dvb-core/dvb_vb2.c
@@ -249,7 +249,8 @@ int dvb_vb2_is_streaming(struct dvb_vb2_ctx *ctx)

int dvb_vb2_fill_buffer(struct dvb_vb2_ctx *ctx,
const unsigned char *src, int len,
- enum dmx_buffer_flags *buffer_flags)
+ enum dmx_buffer_flags *buffer_flags,
+ bool flush)
{
unsigned long flags = 0;
void *vbuf = NULL;
@@ -306,7 +307,7 @@ int dvb_vb2_fill_buffer(struct dvb_vb2_ctx *ctx,
}
}

- if (ctx->nonblocking && ctx->buf) {
+ if (flush && ctx->buf) {
vb2_set_plane_payload(&ctx->buf->vb, 0, ll);
vb2_buffer_done(&ctx->buf->vb, VB2_BUF_STATE_DONE);
list_del(&ctx->buf->list);
diff --git a/drivers/media/i2c/adv7180.c b/drivers/media/i2c/adv7180.c
index f3f47beff76bb..36e0f7a8c6d57 100644
--- a/drivers/media/i2c/adv7180.c
+++ b/drivers/media/i2c/adv7180.c
@@ -480,6 +480,13 @@ static int adv7180_get_frame_interval(struct v4l2_subdev *sd,
fi->interval.denominator = 25;
}

+ /*
+ * If the de-interlacer is active, the chip produces full video frames
+ * at the field rate.
+ */
+ if (state->field == V4L2_FIELD_NONE)
+ fi->interval.denominator *= 2;
+
return 0;
}

diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
index 4b7d8039b1c9f..9ed35f1227efa 100644
--- a/drivers/media/i2c/ccs/ccs-core.c
+++ b/drivers/media/i2c/ccs/ccs-core.c
@@ -2354,7 +2354,7 @@ static void ccs_set_compose_scaler(struct v4l2_subdev *subdev,
* CCS_LIM(sensor, SCALER_N_MIN) / sel->r.height;
max_m = crops[CCS_PAD_SINK]->width
* CCS_LIM(sensor, SCALER_N_MIN)
- / CCS_LIM(sensor, MIN_X_OUTPUT_SIZE);
+ / (CCS_LIM(sensor, MIN_X_OUTPUT_SIZE) ?: 1);

a = clamp(a, CCS_LIM(sensor, SCALER_M_MIN),
CCS_LIM(sensor, SCALER_M_MAX));
@@ -2953,6 +2953,8 @@ static void ccs_cleanup(struct ccs_sensor *sensor)
ccs_free_controls(sensor);
}

+static const struct v4l2_subdev_internal_ops ccs_internal_ops;
+
static int ccs_init_subdev(struct ccs_sensor *sensor,
struct ccs_subdev *ssd, const char *name,
unsigned short num_pads, u32 function,
@@ -2965,8 +2967,10 @@ static int ccs_init_subdev(struct ccs_sensor *sensor,
if (!ssd)
return 0;

- if (ssd != sensor->src)
+ if (ssd != sensor->src) {
v4l2_subdev_init(&ssd->sd, &ccs_ops);
+ ssd->sd.internal_ops = &ccs_internal_ops;
+ }

ssd->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
ssd->sd.entity.function = function;
@@ -3075,6 +3079,10 @@ static const struct media_entity_operations ccs_entity_ops = {
.link_validate = v4l2_subdev_link_validate,
};

+static const struct v4l2_subdev_internal_ops ccs_internal_ops = {
+ .init_state = ccs_init_state,
+};
+
static const struct v4l2_subdev_internal_ops ccs_internal_src_ops = {
.init_state = ccs_init_state,
.registered = ccs_registered,
@@ -3436,7 +3444,21 @@ static int ccs_probe(struct i2c_client *client)
sensor->scale_m = CCS_LIM(sensor, SCALER_N_MIN);

/* prepare PLL configuration input values */
- sensor->pll.bus_type = CCS_PLL_BUS_TYPE_CSI2_DPHY;
+ switch (sensor->hwcfg.csi_signalling_mode) {
+ case CCS_CSI_SIGNALING_MODE_CSI_2_CPHY:
+ sensor->pll.bus_type = CCS_PLL_BUS_TYPE_CSI2_CPHY;
+ break;
+ case CCS_CSI_SIGNALING_MODE_CSI_2_DPHY:
+ case SMIAPP_CSI_SIGNALLING_MODE_CCP2_DATA_CLOCK:
+ case SMIAPP_CSI_SIGNALLING_MODE_CCP2_DATA_STROBE:
+ sensor->pll.bus_type = CCS_PLL_BUS_TYPE_CSI2_DPHY;
+ break;
+ default:
+ dev_err(&client->dev, "unsupported signalling mode %u\n",
+ sensor->hwcfg.csi_signalling_mode);
+ rval = -EINVAL;
+ goto out_cleanup;
+ }
sensor->pll.csi2.lanes = sensor->hwcfg.lanes;
if (CCS_LIM(sensor, CLOCK_CALCULATION) &
CCS_CLOCK_CALCULATION_LANE_SPEED) {
diff --git a/drivers/media/i2c/mt9m114.c b/drivers/media/i2c/mt9m114.c
index c00f9412d08eb..4100f7ffe5e16 100644
--- a/drivers/media/i2c/mt9m114.c
+++ b/drivers/media/i2c/mt9m114.c
@@ -2296,11 +2296,17 @@ static int mt9m114_parse_dt(struct mt9m114 *sensor)
struct fwnode_handle *ep;
int ret;

+ /*
+ * On ACPI systems the fwnode graph can be initialized by a bridge
+ * driver, which may not have probed yet. Wait for this.
+ *
+ * TODO: Return an error once bridge driver code will have moved
+ * to the ACPI core.
+ */
ep = fwnode_graph_get_next_endpoint(fwnode, NULL);
- if (!ep) {
- dev_err(&sensor->client->dev, "No endpoint found\n");
- return -EINVAL;
- }
+ if (!ep)
+ return dev_err_probe(&sensor->client->dev, -EPROBE_DEFER,
+ "waiting for fwnode graph endpoint\n");

sensor->bus_cfg.bus_type = V4L2_MBUS_UNKNOWN;
ret = v4l2_fwnode_endpoint_alloc_parse(ep, &sensor->bus_cfg);
@@ -2359,7 +2365,7 @@ static int mt9m114_probe(struct i2c_client *client)
goto error_ep_free;
}

- sensor->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+ sensor->reset = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);
if (IS_ERR(sensor->reset)) {
ret = PTR_ERR(sensor->reset);
dev_err_probe(dev, ret, "Failed to get reset GPIO\n");
diff --git a/drivers/media/i2c/ov01a10.c b/drivers/media/i2c/ov01a10.c
index 0b9fb1ddbe597..7404f40a4cc41 100644
--- a/drivers/media/i2c/ov01a10.c
+++ b/drivers/media/i2c/ov01a10.c
@@ -17,7 +17,7 @@
#include <media/v4l2-fwnode.h>

#define OV01A10_LINK_FREQ_400MHZ 400000000ULL
-#define OV01A10_SCLK 40000000LL
+#define OV01A10_SCLK 80000000LL
#define OV01A10_DATA_LANES 1

#define OV01A10_REG_CHIP_ID 0x300a
@@ -49,7 +49,7 @@
/* analog gain controls */
#define OV01A10_REG_ANALOG_GAIN 0x3508
#define OV01A10_ANAL_GAIN_MIN 0x100
-#define OV01A10_ANAL_GAIN_MAX 0xffff
+#define OV01A10_ANAL_GAIN_MAX 0x3fff
#define OV01A10_ANAL_GAIN_STEP 1

/* digital gain controls */
@@ -58,7 +58,7 @@
#define OV01A10_REG_DIGITAL_GAIN_GR 0x3513
#define OV01A10_REG_DIGITAL_GAIN_R 0x3516
#define OV01A10_DGTL_GAIN_MIN 0
-#define OV01A10_DGTL_GAIN_MAX 0x3ffff
+#define OV01A10_DGTL_GAIN_MAX 0x3fff
#define OV01A10_DGTL_GAIN_STEP 1
#define OV01A10_DGTL_GAIN_DEFAULT 1024

@@ -76,6 +76,15 @@
#define OV01A10_REG_X_WIN 0x3811
#define OV01A10_REG_Y_WIN 0x3813

+/*
+ * The native ov01a10 bayer-pattern is GBRG, but there was a driver bug enabling
+ * hflip/mirroring by default resulting in BGGR. Because of this bug Intel's
+ * proprietary IPU6 userspace stack expects BGGR. So we report BGGR to not break
+ * userspace and fix things up by shifting the crop window-x coordinate by 1
+ * when hflip is *disabled*.
+ */
+#define OV01A10_MEDIA_BUS_FMT MEDIA_BUS_FMT_SBGGR10_1X10
+
struct ov01a10_reg {
u16 address;
u8 val;
@@ -186,14 +195,14 @@ static const struct ov01a10_reg sensor_1280x800_setting[] = {
{0x380e, 0x03},
{0x380f, 0x80},
{0x3810, 0x00},
- {0x3811, 0x08},
+ {0x3811, 0x09},
{0x3812, 0x00},
{0x3813, 0x08},
{0x3814, 0x01},
{0x3815, 0x01},
{0x3816, 0x01},
{0x3817, 0x01},
- {0x3820, 0xa0},
+ {0x3820, 0xa8},
{0x3822, 0x13},
{0x3832, 0x28},
{0x3833, 0x10},
@@ -241,9 +250,8 @@ static const struct ov01a10_reg sensor_1280x800_setting[] = {
static const char * const ov01a10_test_pattern_menu[] = {
"Disabled",
"Color Bar",
- "Top-Bottom Darker Color Bar",
- "Right-Left Darker Color Bar",
- "Color Bar type 4",
+ "Left-Right Darker Color Bar",
+ "Bottom-Top Darker Color Bar",
};

static const s64 link_freq_menu_items[] = {
@@ -398,10 +406,8 @@ static int ov01a10_update_digital_gain(struct ov01a10 *ov01a10, u32 d_gain)

static int ov01a10_test_pattern(struct ov01a10 *ov01a10, u32 pattern)
{
- if (!pattern)
- return 0;
-
- pattern = (pattern - 1) | OV01A10_TEST_PATTERN_ENABLE;
+ if (pattern)
+ pattern |= OV01A10_TEST_PATTERN_ENABLE;

return ov01a10_write_reg(ov01a10, OV01A10_REG_TEST_PATTERN, 1, pattern);
}
@@ -412,7 +418,7 @@ static int ov01a10_set_hflip(struct ov01a10 *ov01a10, u32 hflip)
int ret;
u32 val, offset;

- offset = hflip ? 0x9 : 0x8;
+ offset = hflip ? 0x8 : 0x9;
ret = ov01a10_write_reg(ov01a10, OV01A10_REG_X_WIN, 1, offset);
if (ret)
return ret;
@@ -421,8 +427,8 @@ static int ov01a10_set_hflip(struct ov01a10 *ov01a10, u32 hflip)
if (ret)
return ret;

- val = hflip ? val | FIELD_PREP(OV01A10_HFLIP_MASK, 0x1) :
- val & ~OV01A10_HFLIP_MASK;
+ val = hflip ? val & ~OV01A10_HFLIP_MASK :
+ val | FIELD_PREP(OV01A10_HFLIP_MASK, 0x1);

return ov01a10_write_reg(ov01a10, OV01A10_REG_FORMAT1, 1, val);
}
@@ -611,7 +617,7 @@ static void ov01a10_update_pad_format(const struct ov01a10_mode *mode,
{
fmt->width = mode->width;
fmt->height = mode->height;
- fmt->code = MEDIA_BUS_FMT_SBGGR10_1X10;
+ fmt->code = OV01A10_MEDIA_BUS_FMT;
fmt->field = V4L2_FIELD_NONE;
fmt->colorspace = V4L2_COLORSPACE_RAW;
}
@@ -723,7 +729,7 @@ static int ov01a10_set_format(struct v4l2_subdev *sd,
h_blank);
}

- format = v4l2_subdev_state_get_format(sd_state, fmt->stream);
+ format = v4l2_subdev_state_get_format(sd_state, fmt->pad);
*format = fmt->format;

return 0;
@@ -752,7 +758,7 @@ static int ov01a10_enum_mbus_code(struct v4l2_subdev *sd,
if (code->index > 0)
return -EINVAL;

- code->code = MEDIA_BUS_FMT_SBGGR10_1X10;
+ code->code = OV01A10_MEDIA_BUS_FMT;

return 0;
}
@@ -762,7 +768,7 @@ static int ov01a10_enum_frame_size(struct v4l2_subdev *sd,
struct v4l2_subdev_frame_size_enum *fse)
{
if (fse->index >= ARRAY_SIZE(supported_modes) ||
- fse->code != MEDIA_BUS_FMT_SBGGR10_1X10)
+ fse->code != OV01A10_MEDIA_BUS_FMT)
return -EINVAL;

fse->min_width = supported_modes[fse->index].width;
@@ -858,6 +864,7 @@ static void ov01a10_remove(struct i2c_client *client)
struct v4l2_subdev *sd = i2c_get_clientdata(client);

v4l2_async_unregister_subdev(sd);
+ v4l2_subdev_cleanup(sd);
media_entity_cleanup(&sd->entity);
v4l2_ctrl_handler_free(sd->ctrl_handler);

@@ -929,6 +936,7 @@ static int ov01a10_probe(struct i2c_client *client)
err_pm_disable:
pm_runtime_disable(dev);
pm_runtime_set_suspended(&client->dev);
+ v4l2_subdev_cleanup(&ov01a10->sd);

err_media_entity_cleanup:
media_entity_cleanup(&ov01a10->sd.entity);
diff --git a/drivers/media/i2c/ov5647.c b/drivers/media/i2c/ov5647.c
index a727beb9d57e2..305677913a0c3 100644
--- a/drivers/media/i2c/ov5647.c
+++ b/drivers/media/i2c/ov5647.c
@@ -69,11 +69,11 @@
#define OV5647_NATIVE_HEIGHT 1956U

#define OV5647_PIXEL_ARRAY_LEFT 16U
-#define OV5647_PIXEL_ARRAY_TOP 16U
+#define OV5647_PIXEL_ARRAY_TOP 6U
#define OV5647_PIXEL_ARRAY_WIDTH 2592U
#define OV5647_PIXEL_ARRAY_HEIGHT 1944U

-#define OV5647_VBLANK_MIN 4
+#define OV5647_VBLANK_MIN 24
#define OV5647_VTS_MAX 32767

#define OV5647_EXPOSURE_MIN 4
@@ -508,7 +508,7 @@ static const struct ov5647_mode ov5647_modes[] = {
{
.format = {
.code = MEDIA_BUS_FMT_SBGGR10_1X10,
- .colorspace = V4L2_COLORSPACE_SRGB,
+ .colorspace = V4L2_COLORSPACE_RAW,
.field = V4L2_FIELD_NONE,
.width = 2592,
.height = 1944
@@ -529,7 +529,7 @@ static const struct ov5647_mode ov5647_modes[] = {
{
.format = {
.code = MEDIA_BUS_FMT_SBGGR10_1X10,
- .colorspace = V4L2_COLORSPACE_SRGB,
+ .colorspace = V4L2_COLORSPACE_RAW,
.field = V4L2_FIELD_NONE,
.width = 1920,
.height = 1080
@@ -550,7 +550,7 @@ static const struct ov5647_mode ov5647_modes[] = {
{
.format = {
.code = MEDIA_BUS_FMT_SBGGR10_1X10,
- .colorspace = V4L2_COLORSPACE_SRGB,
+ .colorspace = V4L2_COLORSPACE_RAW,
.field = V4L2_FIELD_NONE,
.width = 1296,
.height = 972
@@ -571,7 +571,7 @@ static const struct ov5647_mode ov5647_modes[] = {
{
.format = {
.code = MEDIA_BUS_FMT_SBGGR10_1X10,
- .colorspace = V4L2_COLORSPACE_SRGB,
+ .colorspace = V4L2_COLORSPACE_RAW,
.field = V4L2_FIELD_NONE,
.width = 640,
.height = 480
@@ -582,7 +582,7 @@ static const struct ov5647_mode ov5647_modes[] = {
.width = 2560,
.height = 1920,
},
- .pixel_rate = 55000000,
+ .pixel_rate = 58333000,
.hts = 1852,
.vts = 0x1f8,
.reg_list = ov5647_640x480_10bpp,
@@ -1291,6 +1291,8 @@ static int ov5647_init_controls(struct ov5647 *sensor)

v4l2_ctrl_handler_init(&sensor->ctrls, 9);

+ sensor->ctrls.lock = &sensor->lock;
+
v4l2_ctrl_new_std(&sensor->ctrls, &ov5647_ctrl_ops,
V4L2_CID_AUTOGAIN, 0, 1, 1, 0);

@@ -1421,15 +1423,15 @@ static int ov5647_probe(struct i2c_client *client)

sensor->mode = OV5647_DEFAULT_MODE;

- ret = ov5647_init_controls(sensor);
- if (ret)
- goto mutex_destroy;
-
sd = &sensor->sd;
v4l2_i2c_subdev_init(sd, client, &ov5647_subdev_ops);
sd->internal_ops = &ov5647_subdev_internal_ops;
sd->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;

+ ret = ov5647_init_controls(sensor);
+ if (ret)
+ goto mutex_destroy;
+
sensor->pad.flags = MEDIA_PAD_FL_SOURCE;
sd->entity.function = MEDIA_ENT_F_CAM_SENSOR;
ret = media_entity_pads_init(&sd->entity, 1, &sensor->pad);
diff --git a/drivers/media/i2c/tw9903.c b/drivers/media/i2c/tw9903.c
index b996a05e56f28..c3eafd5d5dc82 100644
--- a/drivers/media/i2c/tw9903.c
+++ b/drivers/media/i2c/tw9903.c
@@ -228,6 +228,7 @@ static int tw9903_probe(struct i2c_client *client)

if (write_regs(sd, initial_registers) < 0) {
v4l2_err(client, "error initializing TW9903\n");
+ v4l2_ctrl_handler_free(hdl);
return -EINVAL;
}

diff --git a/drivers/media/i2c/tw9906.c b/drivers/media/i2c/tw9906.c
index 6220f4fddbabc..0ab43fe42d7f4 100644
--- a/drivers/media/i2c/tw9906.c
+++ b/drivers/media/i2c/tw9906.c
@@ -196,6 +196,7 @@ static int tw9906_probe(struct i2c_client *client)

if (write_regs(sd, initial_registers) < 0) {
v4l2_err(client, "error initializing TW9906\n");
+ v4l2_ctrl_handler_free(hdl);
return -EINVAL;
}

diff --git a/drivers/media/pci/cx23885/cx23885-alsa.c b/drivers/media/pci/cx23885/cx23885-alsa.c
index 25dc8d4dc5b73..717fc6c9ef21f 100644
--- a/drivers/media/pci/cx23885/cx23885-alsa.c
+++ b/drivers/media/pci/cx23885/cx23885-alsa.c
@@ -392,8 +392,10 @@ static int snd_cx23885_hw_params(struct snd_pcm_substream *substream,

ret = cx23885_risc_databuffer(chip->pci, &buf->risc, buf->sglist,
chip->period_size, chip->num_periods, 1);
- if (ret < 0)
+ if (ret < 0) {
+ cx23885_alsa_dma_unmap(chip);
goto error;
+ }

/* Loop back to start of program */
buf->risc.jmp[0] = cpu_to_le32(RISC_JUMP|RISC_IRQ1|RISC_CNT_INC);
diff --git a/drivers/media/pci/cx25821/cx25821-alsa.c b/drivers/media/pci/cx25821/cx25821-alsa.c
index a42f0c03a7ca8..f463365163b7e 100644
--- a/drivers/media/pci/cx25821/cx25821-alsa.c
+++ b/drivers/media/pci/cx25821/cx25821-alsa.c
@@ -535,6 +535,7 @@ static int snd_cx25821_hw_params(struct snd_pcm_substream *substream,
chip->period_size, chip->num_periods, 1);
if (ret < 0) {
pr_info("DEBUG: ERROR after cx25821_risc_databuffer_audio()\n");
+ cx25821_alsa_dma_unmap(chip);
goto error;
}

diff --git a/drivers/media/pci/cx25821/cx25821-core.c b/drivers/media/pci/cx25821/cx25821-core.c
index 6627fa9166d30..a7336be444748 100644
--- a/drivers/media/pci/cx25821/cx25821-core.c
+++ b/drivers/media/pci/cx25821/cx25821-core.c
@@ -908,6 +908,7 @@ static int cx25821_dev_setup(struct cx25821_dev *dev)

if (!dev->lmmio) {
CX25821_ERR("ioremap failed, maybe increasing __VMALLOC_RESERVE in page.h\n");
+ release_mem_region(dev->base_io_addr, pci_resource_len(dev->pci, 0));
cx25821_iounmap(dev);
return -ENOMEM;
}
diff --git a/drivers/media/pci/cx88/cx88-alsa.c b/drivers/media/pci/cx88/cx88-alsa.c
index 29fb1311e4434..4e574d8390b4d 100644
--- a/drivers/media/pci/cx88/cx88-alsa.c
+++ b/drivers/media/pci/cx88/cx88-alsa.c
@@ -483,8 +483,10 @@ static int snd_cx88_hw_params(struct snd_pcm_substream *substream,

ret = cx88_risc_databuffer(chip->pci, &buf->risc, buf->sglist,
chip->period_size, chip->num_periods, 1);
- if (ret < 0)
+ if (ret < 0) {
+ cx88_alsa_dma_unmap(chip);
goto error;
+ }

/* Loop back to start of program */
buf->risc.jmp[0] = cpu_to_le32(RISC_JUMP | RISC_IRQ1 | RISC_CNT_INC);
diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
index bbb66b56ee88c..c2acf70af347e 100644
--- a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
+++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
@@ -3,6 +3,7 @@
* Copyright (C) 2013--2024 Intel Corporation
*/
#include <linux/atomic.h>
+#include <linux/cleanup.h>
#include <linux/bug.h>
#include <linux/device.h>
#include <linux/list.h>
@@ -201,6 +202,8 @@ static int buffer_list_get(struct ipu6_isys_stream *stream,
unsigned long flags;
unsigned long buf_flag = IPU6_ISYS_BUFFER_LIST_FL_INCOMING;

+ lockdep_assert_held(&stream->mutex);
+
bl->nbufs = 0;
INIT_LIST_HEAD(&bl->head);

@@ -294,9 +297,8 @@ static int ipu6_isys_stream_start(struct ipu6_isys_video *av,
struct ipu6_isys_buffer_list __bl;
int ret;

- mutex_lock(&stream->isys->stream_mutex);
+ guard(mutex)(&stream->isys->stream_mutex);
ret = ipu6_isys_video_set_streaming(av, 1, bl);
- mutex_unlock(&stream->isys->stream_mutex);
if (ret)
goto out_requeue;

@@ -637,10 +639,10 @@ static void stop_streaming(struct vb2_queue *q)
mutex_lock(&av->isys->stream_mutex);
if (stream->nr_streaming == stream->nr_queues && stream->streaming)
ipu6_isys_video_set_streaming(av, 0, NULL);
+ list_del(&aq->node);
mutex_unlock(&av->isys->stream_mutex);

stream->nr_streaming--;
- list_del(&aq->node);
stream->streaming = 0;
mutex_unlock(&stream->mutex);

diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
index 48388c0c851ca..f25ea042598b0 100644
--- a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
+++ b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
@@ -1023,11 +1023,10 @@ int ipu6_isys_video_set_streaming(struct ipu6_isys_video *av, int state,
sd->name, r_pad->index, stream_mask);
ret = v4l2_subdev_disable_streams(sd, r_pad->index,
stream_mask);
- if (ret) {
+ if (ret)
dev_err(dev, "stream off %s failed with %d\n", sd->name,
ret);
- return ret;
- }
+
close_streaming_firmware(av);
} else {
ret = start_stream_firmware(av, bl);
@@ -1053,6 +1052,7 @@ int ipu6_isys_video_set_streaming(struct ipu6_isys_video *av, int state,

out_media_entity_stop_streaming_firmware:
stop_streaming_firmware(av);
+ close_streaming_firmware(av);

return ret;
}
diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
index 57298ac73d072..4a46f21f469f0 100644
--- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
@@ -102,7 +102,7 @@ static void page_table_dump(struct ipu6_mmu_info *mmu_info)
if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval)
continue;

- l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx];)
+ l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]);
dev_dbg(mmu_info->dev,
"l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pap\n",
l1_idx, iova, iova + ISP_PAGE_SIZE, &l2_phys);
@@ -248,7 +248,7 @@ static u32 *alloc_l2_pt(struct ipu6_mmu_info *mmu_info)

dev_dbg(mmu_info->dev, "alloc_l2: get_zeroed_page() = %p\n", pt);

- for (i = 0; i < ISP_L1PT_PTES; i++)
+ for (i = 0; i < ISP_L2PT_PTES; i++)
pt[i] = mmu_info->dummy_page_pteval;

return pt;
diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
index 1c5c38a30e629..5352219c019c9 100644
--- a/drivers/media/pci/intel/ipu6/ipu6.c
+++ b/drivers/media/pci/intel/ipu6/ipu6.c
@@ -629,21 +629,21 @@ static int ipu6_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (ret) {
dev_err_probe(&isp->pdev->dev, ret,
"Failed to set MMU hardware\n");
- goto out_ipu6_bus_del_devices;
+ goto out_ipu6_rpm_put;
}

ret = ipu6_buttress_map_fw_image(isp->psys, isp->cpd_fw,
&isp->psys->fw_sgt);
if (ret) {
dev_err_probe(&isp->pdev->dev, ret, "failed to map fw image\n");
- goto out_ipu6_bus_del_devices;
+ goto out_ipu6_rpm_put;
}

ret = ipu6_cpd_create_pkg_dir(isp->psys, isp->cpd_fw->data);
if (ret) {
dev_err_probe(&isp->pdev->dev, ret,
"failed to create pkg dir\n");
- goto out_ipu6_bus_del_devices;
+ goto out_ipu6_rpm_put;
}

ret = devm_request_threaded_irq(dev, pdev->irq, ipu6_buttress_isr,
@@ -651,7 +651,7 @@ static int ipu6_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
IRQF_SHARED, IPU6_NAME, isp);
if (ret) {
dev_err_probe(dev, ret, "Requesting irq failed\n");
- goto out_ipu6_bus_del_devices;
+ goto out_ipu6_rpm_put;
}

ret = ipu6_buttress_authenticate(isp);
@@ -682,6 +682,8 @@ static int ipu6_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)

out_free_irq:
devm_free_irq(dev, pdev->irq, isp);
+out_ipu6_rpm_put:
+ pm_runtime_put_sync(&isp->psys->auxdev.dev);
out_ipu6_bus_del_devices:
if (isp->psys) {
ipu6_cpd_free_pkg_dir(isp->psys);
diff --git a/drivers/media/pci/mgb4/mgb4_trigger.c b/drivers/media/pci/mgb4/mgb4_trigger.c
index d7dddc5c8728e..10c23f0c833d5 100644
--- a/drivers/media/pci/mgb4/mgb4_trigger.c
+++ b/drivers/media/pci/mgb4/mgb4_trigger.c
@@ -114,7 +114,7 @@ static int probe_trigger(struct iio_dev *indio_dev, int irq)
if (!st->trig)
return -ENOMEM;

- ret = request_irq(irq, &iio_trigger_generic_data_rdy_poll, 0,
+ ret = request_irq(irq, &iio_trigger_generic_data_rdy_poll, IRQF_NO_THREAD,
"mgb4-trigger", st->trig);
if (ret)
goto error_free_trig;
diff --git a/drivers/media/pci/solo6x10/solo6x10-tw28.c b/drivers/media/pci/solo6x10/solo6x10-tw28.c
index 1b7c22a9bc94f..8f53946c67928 100644
--- a/drivers/media/pci/solo6x10/solo6x10-tw28.c
+++ b/drivers/media/pci/solo6x10/solo6x10-tw28.c
@@ -166,7 +166,7 @@ static const u8 tbl_tw2865_pal_template[] = {
0x64, 0x51, 0x40, 0xaf, 0xFF, 0xF0, 0x00, 0xC0,
};

-#define is_tw286x(__solo, __id) (!(__solo->tw2815 & (1 << __id)))
+#define is_tw286x(__solo, __id) (!((__solo)->tw2815 & (1U << (__id))))

static u8 tw_readbyte(struct solo_dev *solo_dev, int chip_id, u8 tw6x_off,
u8 tw_off)
@@ -686,6 +686,9 @@ int tw28_set_ctrl_val(struct solo_dev *solo_dev, u32 ctrl, u8 ch,
chip_num = ch / 4;
ch %= 4;

+ if (chip_num >= TW_NUM_CHIP)
+ return -EINVAL;
+
if (val > 255 || val < 0)
return -ERANGE;

@@ -758,6 +761,9 @@ int tw28_get_ctrl_val(struct solo_dev *solo_dev, u32 ctrl, u8 ch,
chip_num = ch / 4;
ch %= 4;

+ if (chip_num >= TW_NUM_CHIP)
+ return -EINVAL;
+
switch (ctrl) {
case V4L2_CID_SHARPNESS:
/* Only 286x has sharpness */
diff --git a/drivers/media/platform/amphion/vdec.c b/drivers/media/platform/amphion/vdec.c
index 6a38a0fa0e2d4..86abc1ca202c6 100644
--- a/drivers/media/platform/amphion/vdec.c
+++ b/drivers/media/platform/amphion/vdec.c
@@ -630,6 +630,7 @@ static int vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd
switch (cmd->cmd) {
case V4L2_DEC_CMD_START:
vdec_cmd_start(inst);
+ vb2_clear_last_buffer_dequeued(v4l2_m2m_get_dst_vq(inst->fh.m2m_ctx));
break;
case V4L2_DEC_CMD_STOP:
vdec_cmd_stop(inst);
diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
index 87f5b1d892ca1..72e23f95d6b72 100644
--- a/drivers/media/platform/amphion/vpu_v4l2.c
+++ b/drivers/media/platform/amphion/vpu_v4l2.c
@@ -661,7 +661,6 @@ static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_q
src_vq->mem_ops = &vb2_vmalloc_memops;
src_vq->drv_priv = inst;
src_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
- src_vq->min_queued_buffers = 1;
src_vq->dev = inst->vpu->dev;
src_vq->lock = &inst->lock;
ret = vb2_queue_init(src_vq);
@@ -678,7 +677,6 @@ static int vpu_m2m_queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_q
dst_vq->mem_ops = &vb2_vmalloc_memops;
dst_vq->drv_priv = inst;
dst_vq->buf_struct_size = sizeof(struct vpu_vb2_buffer);
- dst_vq->min_queued_buffers = 1;
dst_vq->dev = inst->vpu->dev;
dst_vq->lock = &inst->lock;
ret = vb2_queue_init(dst_vq);
diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
index e238447c88bbf..8f7154932d24c 100644
--- a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
+++ b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
@@ -1835,8 +1835,10 @@ static int wave5_vpu_open_dec(struct file *filp)
spin_lock_init(&inst->state_spinlock);

inst->codec_info = kzalloc(sizeof(*inst->codec_info), GFP_KERNEL);
- if (!inst->codec_info)
+ if (!inst->codec_info) {
+ kfree(inst);
return -ENOMEM;
+ }

v4l2_fh_init(&inst->v4l2_fh, vdev);
filp->private_data = &inst->v4l2_fh;
diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c b/drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c
index 3e35a05c2d8df..dd34946873ace 100644
--- a/drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c
+++ b/drivers/media/platform/chips-media/wave5/wave5-vpu-enc.c
@@ -640,6 +640,8 @@ static int wave5_vpu_enc_encoder_cmd(struct file *file, void *fh, struct v4l2_en

m2m_ctx->last_src_buf = v4l2_m2m_last_src_buf(m2m_ctx);
m2m_ctx->is_draining = true;
+
+ v4l2_m2m_try_schedule(m2m_ctx);
break;
case V4L2_ENC_CMD_START:
break;
@@ -1336,7 +1338,8 @@ static int wave5_vpu_enc_start_streaming(struct vb2_queue *q, unsigned int count
if (ret)
goto return_buffers;
}
- if (inst->state == VPU_INST_STATE_OPEN && m2m_ctx->cap_q_ctx.q.streaming) {
+ if (inst->state == VPU_INST_STATE_OPEN &&
+ (m2m_ctx->cap_q_ctx.q.streaming || q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE)) {
ret = initialize_sequence(inst);
if (ret) {
dev_warn(inst->dev->dev, "Sequence not found: %d\n", ret);
@@ -1546,8 +1549,10 @@ static int wave5_vpu_open_enc(struct file *filp)
inst->ops = &wave5_vpu_enc_inst_ops;

inst->codec_info = kzalloc(sizeof(*inst->codec_info), GFP_KERNEL);
- if (!inst->codec_info)
+ if (!inst->codec_info) {
+ kfree(inst);
return -ENOMEM;
+ }

v4l2_fh_init(&inst->v4l2_fh, vdev);
filp->private_data = &inst->v4l2_fh;
diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu.c b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
index b13c5cd46d7e4..4f7eda0504abe 100644
--- a/drivers/media/platform/chips-media/wave5/wave5-vpu.c
+++ b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
@@ -305,18 +305,20 @@ static void wave5_vpu_remove(struct platform_device *pdev)
{
struct vpu_device *dev = dev_get_drvdata(&pdev->dev);

+ wave5_vpu_enc_unregister_device(dev);
+ wave5_vpu_dec_unregister_device(dev);
+ v4l2_device_unregister(&dev->v4l2_dev);
+
if (dev->irq < 0) {
- kthread_destroy_worker(dev->worker);
hrtimer_cancel(&dev->hrtimer);
+ kthread_cancel_work_sync(&dev->work);
+ kthread_destroy_worker(dev->worker);
}

mutex_destroy(&dev->dev_lock);
mutex_destroy(&dev->hw_lock);
reset_control_assert(dev->resets);
clk_bulk_disable_unprepare(dev->num_clks, dev->clks);
- wave5_vpu_enc_unregister_device(dev);
- wave5_vpu_dec_unregister_device(dev);
- v4l2_device_unregister(&dev->v4l2_dev);
wave5_vdi_release(&pdev->dev);
ida_destroy(&dev->inst_ida);
}
diff --git a/drivers/media/platform/mediatek/mdp/mtk_mdp_core.c b/drivers/media/platform/mediatek/mdp/mtk_mdp_core.c
index 917cdf38f230e..98540015b1cca 100644
--- a/drivers/media/platform/mediatek/mdp/mtk_mdp_core.c
+++ b/drivers/media/platform/mediatek/mdp/mtk_mdp_core.c
@@ -194,11 +194,17 @@ static int mtk_mdp_probe(struct platform_device *pdev)
}

mdp->vpu_dev = vpu_get_plat_device(pdev);
+ if (!mdp->vpu_dev) {
+ dev_err(&pdev->dev, "Failed to get vpu device\n");
+ ret = -ENODEV;
+ goto err_vpu_get_dev;
+ }
+
ret = vpu_wdt_reg_handler(mdp->vpu_dev, mtk_mdp_reset_handler, mdp,
VPU_RST_MDP);
if (ret) {
dev_err(&pdev->dev, "Failed to register reset handler\n");
- goto err_m2m_register;
+ goto err_reg_handler;
}

platform_set_drvdata(pdev, mdp);
@@ -206,7 +212,7 @@ static int mtk_mdp_probe(struct platform_device *pdev)
ret = vb2_dma_contig_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
if (ret) {
dev_err(&pdev->dev, "Failed to set vb2 dma mag seg size\n");
- goto err_m2m_register;
+ goto err_reg_handler;
}

pm_runtime_enable(dev);
@@ -214,6 +220,12 @@ static int mtk_mdp_probe(struct platform_device *pdev)

return 0;

+err_reg_handler:
+ platform_device_put(mdp->vpu_dev);
+
+err_vpu_get_dev:
+ mtk_mdp_unregister_m2m_device(mdp);
+
err_m2m_register:
v4l2_device_unregister(&mdp->v4l2_dev);

@@ -242,6 +254,7 @@ static void mtk_mdp_remove(struct platform_device *pdev)

pm_runtime_disable(&pdev->dev);
vb2_dma_contig_clear_max_seg_size(&pdev->dev);
+ platform_device_put(mdp->vpu_dev);
mtk_mdp_unregister_m2m_device(mdp);
v4l2_device_unregister(&mdp->v4l2_dev);

diff --git a/drivers/media/platform/mediatek/vcodec/decoder/mtk_vcodec_dec_stateless.c b/drivers/media/platform/mediatek/vcodec/decoder/mtk_vcodec_dec_stateless.c
index 3307dc15fc1df..f5aa0b06f9e70 100644
--- a/drivers/media/platform/mediatek/vcodec/decoder/mtk_vcodec_dec_stateless.c
+++ b/drivers/media/platform/mediatek/vcodec/decoder/mtk_vcodec_dec_stateless.c
@@ -504,6 +504,12 @@ static int mtk_vdec_s_ctrl(struct v4l2_ctrl *ctrl)
mtk_v4l2_vdec_err(ctx, "VP9: bit_depth:%d", frame->bit_depth);
return -EINVAL;
}
+
+ if (!(frame->flags & V4L2_VP9_FRAME_FLAG_X_SUBSAMPLING) ||
+ !(frame->flags & V4L2_VP9_FRAME_FLAG_Y_SUBSAMPLING)) {
+ mtk_v4l2_vdec_err(ctx, "VP9: only 420 subsampling is supported");
+ return -EINVAL;
+ }
break;
case V4L2_CID_STATELESS_AV1_SEQUENCE:
seq = (struct v4l2_ctrl_av1_sequence *)hdr_ctrl->p_new.p;
diff --git a/drivers/media/platform/mediatek/vcodec/encoder/mtk_vcodec_enc.c b/drivers/media/platform/mediatek/vcodec/encoder/mtk_vcodec_enc.c
index 7eaf0e24c9fc4..3cbc4beb2c40f 100644
--- a/drivers/media/platform/mediatek/vcodec/encoder/mtk_vcodec_enc.c
+++ b/drivers/media/platform/mediatek/vcodec/encoder/mtk_vcodec_enc.c
@@ -865,7 +865,7 @@ static void vb2ops_venc_buf_queue(struct vb2_buffer *vb)
static int vb2ops_venc_start_streaming(struct vb2_queue *q, unsigned int count)
{
struct mtk_vcodec_enc_ctx *ctx = vb2_get_drv_priv(q);
- struct venc_enc_param param;
+ struct venc_enc_param param = { };
int ret;
int i;

@@ -1021,7 +1021,7 @@ static int mtk_venc_encode_header(void *priv)
int ret;
struct vb2_v4l2_buffer *src_buf, *dst_buf;
struct mtk_vcodec_mem bs_buf;
- struct venc_done_result enc_result;
+ struct venc_done_result enc_result = { };

dst_buf = v4l2_m2m_dst_buf_remove(ctx->m2m_ctx);
if (!dst_buf) {
@@ -1142,7 +1142,7 @@ static void mtk_venc_worker(struct work_struct *work)
struct vb2_v4l2_buffer *src_buf, *dst_buf;
struct venc_frm_buf frm_buf;
struct mtk_vcodec_mem bs_buf;
- struct venc_done_result enc_result;
+ struct venc_done_result enc_result = { };
int ret, i;

/* check dst_buf, dst_buf may be removed in device_run
diff --git a/drivers/media/platform/qcom/camss/camss-vfe-480.c b/drivers/media/platform/qcom/camss/camss-vfe-480.c
index dc2735476c823..003e1900d3c98 100644
--- a/drivers/media/platform/qcom/camss/camss-vfe-480.c
+++ b/drivers/media/platform/qcom/camss/camss-vfe-480.c
@@ -220,11 +220,13 @@ static irqreturn_t vfe_isr(int irq, void *dev)
writel_relaxed(status, vfe->base + VFE_BUS_IRQ_CLEAR(0));
writel_relaxed(1, vfe->base + VFE_BUS_IRQ_CLEAR_GLOBAL);

- /* Loop through all WMs IRQs */
- for (i = 0; i < MSM_VFE_IMAGE_MASTERS_NUM; i++) {
+ for (i = 0; i < MAX_VFE_OUTPUT_LINES; i++) {
if (status & BUS_IRQ_MASK_0_RDI_RUP(vfe, i))
vfe_isr_reg_update(vfe, i);
+ }

+ /* Loop through all WMs IRQs */
+ for (i = 0; i < MSM_VFE_IMAGE_MASTERS_NUM; i++) {
if (status & BUS_IRQ_MASK_0_COMP_DONE(vfe, RDI_COMP_GROUP(i)))
vfe_isr_wm_done(vfe, i);
}
diff --git a/drivers/media/platform/qcom/venus/vdec.c b/drivers/media/platform/qcom/venus/vdec.c
index 6846973a11594..bc7325a2f197e 100644
--- a/drivers/media/platform/qcom/venus/vdec.c
+++ b/drivers/media/platform/qcom/venus/vdec.c
@@ -568,7 +568,13 @@ vdec_decoder_cmd(struct file *file, void *fh, struct v4l2_decoder_cmd *cmd)

fdata.buffer_type = HFI_BUFFER_INPUT;
fdata.flags |= HFI_BUFFERFLAG_EOS;
- if (IS_V6(inst->core) && is_fw_rev_or_older(inst->core, 1, 0, 87))
+
+ /*
+ * Send a NULL EOS address only for IRIS2 (SM8250), for firmware <= 1.0.87.
+ * SC7280 also reports "1.0.<hash>", parsed as 1.0.0; restricting the quirk
+ * to IRIS2 avoids misapplying it and breaking VP9 decode on SC7280.
+ */
+
+ if (IS_IRIS2(inst->core) && is_fw_rev_or_older(inst->core, 1, 0, 87))
fdata.device_addr = 0;
else
fdata.device_addr = 0xdeadb000;
@@ -1433,10 +1439,10 @@ static void vdec_buf_done(struct venus_inst *inst, unsigned int buf_type,
inst->drain_active = false;
inst->codec_state = VENUS_DEC_STATE_STOPPED;
}
+ } else {
+ if (!bytesused)
+ state = VB2_BUF_STATE_ERROR;
}
-
- if (!bytesused)
- state = VB2_BUF_STATE_ERROR;
} else {
vbuf->sequence = inst->sequence_out++;
}
diff --git a/drivers/media/platform/rockchip/rga/rga-buf.c b/drivers/media/platform/rockchip/rga/rga-buf.c
index 70808049d2e81..ddc244ffc68c1 100644
--- a/drivers/media/platform/rockchip/rga/rga-buf.c
+++ b/drivers/media/platform/rockchip/rga/rga-buf.c
@@ -80,6 +80,9 @@ static int rga_buf_init(struct vb2_buffer *vb)
struct rga_frame *f = rga_get_frame(ctx, vb->vb2_queue->type);
size_t n_desc = 0;

+ if (IS_ERR(f))
+ return PTR_ERR(f);
+
n_desc = DIV_ROUND_UP(f->size, PAGE_SIZE);

rbuf->n_desc = n_desc;
diff --git a/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c b/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
index 320581a9f866e..ba6fff91fa17a 100644
--- a/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
+++ b/drivers/media/platform/rockchip/rkisp1/rkisp1-params.c
@@ -407,12 +407,6 @@ static void rkisp1_flt_config(struct rkisp1_params *params,
rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_FILT_LUM_WEIGHT,
arg->lum_weight);

- rkisp1_write(params->rkisp1, RKISP1_CIF_ISP_FILT_MODE,
- (arg->mode ? RKISP1_CIF_ISP_FLT_MODE_DNR : 0) |
- RKISP1_CIF_ISP_FLT_CHROMA_V_MODE(arg->chr_v_mode) |
- RKISP1_CIF_ISP_FLT_CHROMA_H_MODE(arg->chr_h_mode) |
- RKISP1_CIF_ISP_FLT_GREEN_STAGE1(arg->grn_stage1));
-
/* avoid to override the old enable value */
filt_mode = rkisp1_read(params->rkisp1, RKISP1_CIF_ISP_FILT_MODE);
filt_mode &= RKISP1_CIF_ISP_FLT_ENA;
diff --git a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
index 0f6918f4db383..b461d86bed540 100644
--- a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
+++ b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
@@ -504,6 +504,9 @@ static void dcmipp_bytecap_stop_streaming(struct vb2_queue *vq)
/* Disable pipe */
reg_clear(vcap, DCMIPP_P0FSCR, DCMIPP_P0FSCR_PIPEN);

+ /* Clear any pending interrupts */
+ reg_write(vcap, DCMIPP_CMFCR, DCMIPP_CMIER_P0ALL);
+
spin_lock_irq(&vcap->irqlock);

/* Return all queued buffers to vb2 in ERROR state */
diff --git a/drivers/media/platform/ti/omap3isp/isppreview.c b/drivers/media/platform/ti/omap3isp/isppreview.c
index e383a57654de8..5c492b31b5160 100644
--- a/drivers/media/platform/ti/omap3isp/isppreview.c
+++ b/drivers/media/platform/ti/omap3isp/isppreview.c
@@ -1742,22 +1742,17 @@ static void preview_try_format(struct isp_prev_device *prev,

switch (pad) {
case PREV_PAD_SINK:
- /* When reading data from the CCDC, the input size has already
- * been mangled by the CCDC output pad so it can be accepted
- * as-is.
- *
- * When reading data from memory, clamp the requested width and
- * height. The TRM doesn't specify a minimum input height, make
+ /*
+ * Clamp the requested width and height.
+ * The TRM doesn't specify a minimum input height, make
* sure we got enough lines to enable the noise filter and color
* filter array interpolation.
*/
- if (prev->input == PREVIEW_INPUT_MEMORY) {
- fmt->width = clamp_t(u32, fmt->width, PREV_MIN_IN_WIDTH,
- preview_max_out_width(prev));
- fmt->height = clamp_t(u32, fmt->height,
- PREV_MIN_IN_HEIGHT,
- PREV_MAX_IN_HEIGHT);
- }
+ fmt->width = clamp_t(u32, fmt->width, PREV_MIN_IN_WIDTH,
+ preview_max_out_width(prev));
+ fmt->height = clamp_t(u32, fmt->height,
+ PREV_MIN_IN_HEIGHT,
+ PREV_MAX_IN_HEIGHT);

fmt->colorspace = V4L2_COLORSPACE_SRGB;

diff --git a/drivers/media/platform/ti/omap3isp/ispvideo.c b/drivers/media/platform/ti/omap3isp/ispvideo.c
index daca689dc0825..b9e0b6215fa04 100644
--- a/drivers/media/platform/ti/omap3isp/ispvideo.c
+++ b/drivers/media/platform/ti/omap3isp/ispvideo.c
@@ -148,12 +148,12 @@ static unsigned int isp_video_mbus_to_pix(const struct isp_video *video,
pix->width = mbus->width;
pix->height = mbus->height;

- for (i = 0; i < ARRAY_SIZE(formats); ++i) {
+ for (i = 0; i < ARRAY_SIZE(formats) - 1; ++i) {
if (formats[i].code == mbus->code)
break;
}

- if (WARN_ON(i == ARRAY_SIZE(formats)))
+ if (WARN_ON(i == ARRAY_SIZE(formats) - 1))
return 0;

min_bpl = pix->width * formats[i].bpp;
@@ -191,7 +191,7 @@ static void isp_video_pix_to_mbus(const struct v4l2_pix_format *pix,
/* Skip the last format in the loop so that it will be selected if no
* match is found.
*/
- for (i = 0; i < ARRAY_SIZE(formats) - 1; ++i) {
+ for (i = 0; i < ARRAY_SIZE(formats) - 2; ++i) {
if (formats[i].pixelformat == pix->pixelformat)
break;
}
@@ -1288,6 +1288,7 @@ static const struct v4l2_ioctl_ops isp_video_ioctl_ops = {
static int isp_video_open(struct file *file)
{
struct isp_video *video = video_drvdata(file);
+ struct v4l2_mbus_framefmt fmt;
struct isp_video_fh *handle;
struct vb2_queue *queue;
int ret = 0;
@@ -1329,6 +1330,13 @@ static int isp_video_open(struct file *file)

memset(&handle->format, 0, sizeof(handle->format));
handle->format.type = video->type;
+ handle->format.fmt.pix.width = 720;
+ handle->format.fmt.pix.height = 480;
+ handle->format.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
+ handle->format.fmt.pix.field = V4L2_FIELD_NONE;
+ handle->format.fmt.pix.colorspace = V4L2_COLORSPACE_SRGB;
+ isp_video_pix_to_mbus(&handle->format.fmt.pix, &fmt);
+ isp_video_mbus_to_pix(video, &fmt, &handle->format.fmt.pix);
handle->timeperframe.denominator = 1;

handle->video = video;
diff --git a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
index e54f5fac325bd..1dfc754099e74 100644
--- a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+++ b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
@@ -72,6 +72,14 @@
: AV1_DIV_ROUND_UP_POW2((_value_), (_n_))); \
})

+enum rockchip_av1_tx_mode {
+ ROCKCHIP_AV1_TX_MODE_ONLY_4X4 = 0,
+ ROCKCHIP_AV1_TX_MODE_8X8 = 1,
+ ROCKCHIP_AV1_TX_MODE_16x16 = 2,
+ ROCKCHIP_AV1_TX_MODE_32x32 = 3,
+ ROCKCHIP_AV1_TX_MODE_SELECT = 4,
+};
+
struct rockchip_av1_film_grain {
u8 scaling_lut_y[256];
u8 scaling_lut_cb[256];
@@ -373,12 +381,12 @@ int rockchip_vpu981_av1_dec_init(struct hantro_ctx *ctx)
return -ENOMEM;
av1_dec->global_model.size = GLOBAL_MODEL_SIZE;

- av1_dec->tile_info.cpu = dma_alloc_coherent(vpu->dev, AV1_MAX_TILES,
+ av1_dec->tile_info.cpu = dma_alloc_coherent(vpu->dev, AV1_TILE_INFO_SIZE,
&av1_dec->tile_info.dma,
GFP_KERNEL);
if (!av1_dec->tile_info.cpu)
return -ENOMEM;
- av1_dec->tile_info.size = AV1_MAX_TILES;
+ av1_dec->tile_info.size = AV1_TILE_INFO_SIZE;

av1_dec->film_grain.cpu = dma_alloc_coherent(vpu->dev,
ALIGN(sizeof(struct rockchip_av1_film_grain), 2048),
@@ -1398,8 +1406,16 @@ static void rockchip_vpu981_av1_dec_set_cdef(struct hantro_ctx *ctx)
u16 luma_sec_strength = 0;
u32 chroma_pri_strength = 0;
u16 chroma_sec_strength = 0;
+ bool enable_cdef;
int i;

+ enable_cdef = !(cdef->bits == 0 &&
+ cdef->damping_minus_3 == 0 &&
+ cdef->y_pri_strength[0] == 0 &&
+ cdef->y_sec_strength[0] == 0 &&
+ cdef->uv_pri_strength[0] == 0 &&
+ cdef->uv_sec_strength[0] == 0);
+ hantro_reg_write(vpu, &av1_enable_cdef, enable_cdef);
hantro_reg_write(vpu, &av1_cdef_bits, cdef->bits);
hantro_reg_write(vpu, &av1_cdef_damping, cdef->damping_minus_3);

@@ -1929,11 +1945,26 @@ static void rockchip_vpu981_av1_dec_set_reference_frames(struct hantro_ctx *ctx)
rockchip_vpu981_av1_dec_set_other_frames(ctx);
}

+static int rockchip_vpu981_av1_get_hardware_tx_mode(enum v4l2_av1_tx_mode tx_mode)
+{
+ switch (tx_mode) {
+ case V4L2_AV1_TX_MODE_ONLY_4X4:
+ return ROCKCHIP_AV1_TX_MODE_ONLY_4X4;
+ case V4L2_AV1_TX_MODE_LARGEST:
+ return ROCKCHIP_AV1_TX_MODE_32x32;
+ case V4L2_AV1_TX_MODE_SELECT:
+ return ROCKCHIP_AV1_TX_MODE_SELECT;
+ }
+
+ return ROCKCHIP_AV1_TX_MODE_32x32;
+}
+
static void rockchip_vpu981_av1_dec_set_parameters(struct hantro_ctx *ctx)
{
struct hantro_dev *vpu = ctx->dev;
struct hantro_av1_dec_hw_ctx *av1_dec = &ctx->av1_dec;
struct hantro_av1_dec_ctrls *ctrls = &av1_dec->ctrls;
+ int tx_mode;

hantro_reg_write(vpu, &av1_skip_mode,
!!(ctrls->frame->flags & V4L2_AV1_FRAME_FLAG_SKIP_MODE_PRESENT));
@@ -1955,8 +1986,6 @@ static void rockchip_vpu981_av1_dec_set_parameters(struct hantro_ctx *ctx)
!!(ctrls->frame->flags & V4L2_AV1_FRAME_FLAG_SHOW_FRAME));
hantro_reg_write(vpu, &av1_switchable_motion_mode,
!!(ctrls->frame->flags & V4L2_AV1_FRAME_FLAG_IS_MOTION_MODE_SWITCHABLE));
- hantro_reg_write(vpu, &av1_enable_cdef,
- !!(ctrls->sequence->flags & V4L2_AV1_SEQUENCE_FLAG_ENABLE_CDEF));
hantro_reg_write(vpu, &av1_allow_masked_compound,
!!(ctrls->sequence->flags
& V4L2_AV1_SEQUENCE_FLAG_ENABLE_MASKED_COMPOUND));
@@ -1991,7 +2020,7 @@ static void rockchip_vpu981_av1_dec_set_parameters(struct hantro_ctx *ctx)
!!(ctrls->frame->quantization.flags
& V4L2_AV1_QUANTIZATION_FLAG_DELTA_Q_PRESENT));

- hantro_reg_write(vpu, &av1_idr_pic_e, !ctrls->frame->frame_type);
+ hantro_reg_write(vpu, &av1_idr_pic_e, IS_INTRA(ctrls->frame->frame_type));
hantro_reg_write(vpu, &av1_quant_base_qindex, ctrls->frame->quantization.base_q_idx);
hantro_reg_write(vpu, &av1_bit_depth_y_minus8, ctx->bit_depth - 8);
hantro_reg_write(vpu, &av1_bit_depth_c_minus8, ctx->bit_depth - 8);
@@ -2001,7 +2030,9 @@ static void rockchip_vpu981_av1_dec_set_parameters(struct hantro_ctx *ctx)
!!(ctrls->frame->flags & V4L2_AV1_FRAME_FLAG_ALLOW_HIGH_PRECISION_MV));
hantro_reg_write(vpu, &av1_comp_pred_mode,
(ctrls->frame->flags & V4L2_AV1_FRAME_FLAG_REFERENCE_SELECT) ? 2 : 0);
- hantro_reg_write(vpu, &av1_transform_mode, (ctrls->frame->tx_mode == 1) ? 3 : 4);
+
+ tx_mode = rockchip_vpu981_av1_get_hardware_tx_mode(ctrls->frame->tx_mode);
+ hantro_reg_write(vpu, &av1_transform_mode, tx_mode);
hantro_reg_write(vpu, &av1_max_cb_size,
(ctrls->sequence->flags
& V4L2_AV1_SEQUENCE_FLAG_USE_128X128_SUPERBLOCK) ? 7 : 6);
diff --git a/drivers/media/radio/radio-keene.c b/drivers/media/radio/radio-keene.c
index a35648316aa8d..05d3c4b567211 100644
--- a/drivers/media/radio/radio-keene.c
+++ b/drivers/media/radio/radio-keene.c
@@ -338,7 +338,6 @@ static int usb_keene_probe(struct usb_interface *intf,
if (hdl->error) {
retval = hdl->error;

- v4l2_ctrl_handler_free(hdl);
goto err_v4l2;
}
retval = v4l2_device_register(&intf->dev, &radio->v4l2_dev);
@@ -384,6 +383,7 @@ static int usb_keene_probe(struct usb_interface *intf,
err_vdev:
v4l2_device_unregister(&radio->v4l2_dev);
err_v4l2:
+ v4l2_ctrl_handler_free(&radio->hdl);
kfree(radio->buffer);
kfree(radio);
err:
diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
index 2de104736f874..943d56b10251b 100644
--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
@@ -3709,6 +3709,11 @@ status);
"Failed to submit read-control URB status=%d",
status);
hdw->ctl_read_pend_flag = 0;
+ if (hdw->ctl_write_pend_flag) {
+ usb_unlink_urb(hdw->ctl_write_urb);
+ while (hdw->ctl_write_pend_flag)
+ wait_for_completion(&hdw->ctl_done);
+ }
goto done;
}
}
diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
index 57e6f9af536ff..57bccbf17f6dc 100644
--- a/drivers/media/usb/uvc/uvc_video.c
+++ b/drivers/media/usb/uvc/uvc_video.c
@@ -1842,7 +1842,7 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,
npackets = UVC_MAX_PACKETS;

/* Retry allocations until one succeed. */
- for (; npackets > 1; npackets /= 2) {
+ for (; npackets > 0; npackets /= 2) {
stream->urb_size = psize * npackets;

for (i = 0; i < UVC_URBS; ++i) {
@@ -1867,6 +1867,7 @@ static int uvc_alloc_urb_buffers(struct uvc_streaming *stream,
uvc_dbg(stream->dev, VIDEO,
"Failed to allocate URB buffers (%u bytes per packet)\n",
psize);
+ stream->urb_size = 0;
return 0;
}

diff --git a/drivers/media/v4l2-core/v4l2-async.c b/drivers/media/v4l2-core/v4l2-async.c
index ee884a8221fbd..1c08bba9ecb91 100644
--- a/drivers/media/v4l2-core/v4l2-async.c
+++ b/drivers/media/v4l2-core/v4l2-async.c
@@ -343,7 +343,6 @@ static int v4l2_async_match_notify(struct v4l2_async_notifier *notifier,
struct v4l2_subdev *sd,
struct v4l2_async_connection *asc)
{
- struct v4l2_async_notifier *subdev_notifier;
bool registered = false;
int ret;

@@ -389,6 +388,25 @@ static int v4l2_async_match_notify(struct v4l2_async_notifier *notifier,
dev_dbg(notifier_dev(notifier), "v4l2-async: %s bound (ret %d)\n",
dev_name(sd->dev), ret);

+ return 0;
+
+err_call_unbind:
+ v4l2_async_nf_call_unbind(notifier, sd, asc);
+ list_del(&asc->asc_subdev_entry);
+
+err_unregister_subdev:
+ if (registered)
+ v4l2_device_unregister_subdev(sd);
+
+ return ret;
+}
+
+static int
+v4l2_async_nf_try_subdev_notifier(struct v4l2_async_notifier *notifier,
+ struct v4l2_subdev *sd)
+{
+ struct v4l2_async_notifier *subdev_notifier;
+
/*
* See if the sub-device has a notifier. If not, return here.
*/
@@ -404,16 +422,6 @@ static int v4l2_async_match_notify(struct v4l2_async_notifier *notifier,
subdev_notifier->parent = notifier;

return v4l2_async_nf_try_all_subdevs(subdev_notifier);
-
-err_call_unbind:
- v4l2_async_nf_call_unbind(notifier, sd, asc);
- list_del(&asc->asc_subdev_entry);
-
-err_unregister_subdev:
- if (registered)
- v4l2_device_unregister_subdev(sd);
-
- return ret;
}

/* Test all async sub-devices in a notifier for a match. */
@@ -445,6 +453,10 @@ v4l2_async_nf_try_all_subdevs(struct v4l2_async_notifier *notifier)
if (ret < 0)
return ret;

+ ret = v4l2_async_nf_try_subdev_notifier(notifier, sd);
+ if (ret < 0)
+ return ret;
+
/*
* v4l2_async_match_notify() may lead to registering a
* new notifier and thus changing the async subdevs
@@ -829,7 +841,11 @@ int __v4l2_async_register_subdev(struct v4l2_subdev *sd, struct module *module)
ret = v4l2_async_match_notify(notifier, v4l2_dev, sd,
asc);
if (ret)
- goto err_unbind;
+ goto err_unlock;
+
+ ret = v4l2_async_nf_try_subdev_notifier(notifier, sd);
+ if (ret)
+ goto err_unbind_one;

ret = v4l2_async_nf_try_complete(notifier);
if (ret)
@@ -853,9 +869,10 @@ int __v4l2_async_register_subdev(struct v4l2_subdev *sd, struct module *module)
if (subdev_notifier)
v4l2_async_nf_unbind_all_subdevs(subdev_notifier);

- if (asc)
- v4l2_async_unbind_subdev_one(notifier, asc);
+err_unbind_one:
+ v4l2_async_unbind_subdev_one(notifier, asc);

+err_unlock:
mutex_unlock(&list_lock);

sd->owner = NULL;
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index f9325bcce1b94..f08b8009eeea0 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -347,6 +347,17 @@ config MFD_CS47L92
help
Support for Cirrus Logic CS42L92, CS47L92 and CS47L93 Smart Codecs

+config MFD_TN48M_CPLD
+ tristate "Delta Networks TN48M switch CPLD driver"
+ depends on I2C
+ depends on ARCH_MVEBU || COMPILE_TEST
+ select MFD_SIMPLE_MFD_I2C
+ help
+ Select this option to enable support for Delta Networks TN48M switch
CPLD. It consists of reset and GPIO drivers. The CPLD provides GPIOs
for the SFP slots as well as power-supply-related information.
+ SFP support depends on the GPIO driver being selected.
+
config PMIC_DA903X
bool "Dialog Semiconductor DA9030/DA9034 PMIC Support"
depends on I2C=y
@@ -1161,6 +1172,19 @@ config MFD_QCOM_RPM
Say M here if you want to include support for the Qualcomm RPM as a
module. This will build a module called "qcom_rpm".

+config MFD_SPACEMIT_P1
+ tristate "SpacemiT P1 PMIC"
+ depends on ARCH_SPACEMIT || COMPILE_TEST
+ depends on I2C
+ select I2C_K1
+ select MFD_SIMPLE_MFD_I2C
+ help
+ This option supports the I2C-based SpacemiT P1 PMIC, which
+ contains regulators, a power switch, GPIOs, an RTC, and more.
+ This option is selected when any of the supported sub-devices
+ is configured. The basic functionality is implemented by the
+ simple MFD I2C driver.
+
config MFD_SPMI_PMIC
tristate "Qualcomm SPMI PMICs"
depends on ARCH_QCOM || COMPILE_TEST
diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
index 85ff8717d8504..91975536d14d2 100644
--- a/drivers/mfd/arizona-core.c
+++ b/drivers/mfd/arizona-core.c
@@ -1100,7 +1100,7 @@ int arizona_dev_init(struct arizona *arizona)
} else if (val & 0x01) {
ret = wm5102_clear_write_sequencer(arizona);
if (ret)
- return ret;
+ goto err_reset;
}
break;
default:
diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c
index 80fc5c0cac2fb..be5f2b34e18ae 100644
--- a/drivers/mfd/da9052-spi.c
+++ b/drivers/mfd/da9052-spi.c
@@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi)
spi_set_drvdata(spi, da9052);

config = da9052_regmap_config;
- config.write_flag_mask = 1;
+ config.read_flag_mask = 1;
config.reg_bits = 7;
config.pad_bits = 1;
config.val_bits = 8;
diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
index 5b1c13fb23468..a7feb82c28e3f 100644
--- a/drivers/mfd/intel-lpss-pci.c
+++ b/drivers/mfd/intel-lpss-pci.c
@@ -437,6 +437,19 @@ static const struct pci_device_id intel_lpss_pci_ids[] = {
{ PCI_VDEVICE(INTEL, 0x5ac4), (kernel_ulong_t)&bxt_spi_info },
{ PCI_VDEVICE(INTEL, 0x5ac6), (kernel_ulong_t)&bxt_spi_info },
{ PCI_VDEVICE(INTEL, 0x5aee), (kernel_ulong_t)&bxt_uart_info },
+ /* NVL-S */
+ { PCI_VDEVICE(INTEL, 0x6e28), (kernel_ulong_t)&bxt_uart_info },
+ { PCI_VDEVICE(INTEL, 0x6e29), (kernel_ulong_t)&bxt_uart_info },
+ { PCI_VDEVICE(INTEL, 0x6e2a), (kernel_ulong_t)&tgl_spi_info },
+ { PCI_VDEVICE(INTEL, 0x6e2b), (kernel_ulong_t)&tgl_spi_info },
+ { PCI_VDEVICE(INTEL, 0x6e4c), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x6e4d), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x6e4e), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x6e4f), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x6e5c), (kernel_ulong_t)&bxt_uart_info },
+ { PCI_VDEVICE(INTEL, 0x6e5e), (kernel_ulong_t)&tgl_spi_info },
+ { PCI_VDEVICE(INTEL, 0x6e7a), (kernel_ulong_t)&ehl_i2c_info },
+ { PCI_VDEVICE(INTEL, 0x6e7b), (kernel_ulong_t)&ehl_i2c_info },
/* ARL-H */
{ PCI_VDEVICE(INTEL, 0x7725), (kernel_ulong_t)&bxt_uart_info },
{ PCI_VDEVICE(INTEL, 0x7726), (kernel_ulong_t)&bxt_uart_info },
diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
index 7d14a1e7631ee..c55223ce4327a 100644
--- a/drivers/mfd/mfd-core.c
+++ b/drivers/mfd/mfd-core.c
@@ -22,6 +22,7 @@
#include <linux/regulator/consumer.h>

static LIST_HEAD(mfd_of_node_list);
+static DEFINE_MUTEX(mfd_of_node_mutex);

struct mfd_of_node_entry {
struct list_head list;
@@ -105,9 +106,11 @@ static int mfd_match_of_node_to_dev(struct platform_device *pdev,
u64 of_node_addr;

/* Skip if OF node has previously been allocated to a device */
- list_for_each_entry(of_entry, &mfd_of_node_list, list)
- if (of_entry->np == np)
- return -EAGAIN;
+ scoped_guard(mutex, &mfd_of_node_mutex) {
+ list_for_each_entry(of_entry, &mfd_of_node_list, list)
+ if (of_entry->np == np)
+ return -EAGAIN;
+ }

if (!cell->use_of_reg)
/* No of_reg defined - allocate first free compatible match */
@@ -129,7 +132,8 @@ static int mfd_match_of_node_to_dev(struct platform_device *pdev,

of_entry->dev = &pdev->dev;
of_entry->np = np;
- list_add_tail(&of_entry->list, &mfd_of_node_list);
+ scoped_guard(mutex, &mfd_of_node_mutex)
+ list_add_tail(&of_entry->list, &mfd_of_node_list);

of_node_get(np);
device_set_node(&pdev->dev, of_fwnode_handle(np));
@@ -286,11 +290,13 @@ static int mfd_add_device(struct device *parent, int id,
if (cell->swnode)
device_remove_software_node(&pdev->dev);
fail_of_entry:
- list_for_each_entry_safe(of_entry, tmp, &mfd_of_node_list, list)
- if (of_entry->dev == &pdev->dev) {
- list_del(&of_entry->list);
- kfree(of_entry);
- }
+ scoped_guard(mutex, &mfd_of_node_mutex) {
+ list_for_each_entry_safe(of_entry, tmp, &mfd_of_node_list, list)
+ if (of_entry->dev == &pdev->dev) {
+ list_del(&of_entry->list);
+ kfree(of_entry);
+ }
+ }
fail_alias:
regulator_bulk_unregister_supply_alias(&pdev->dev,
cell->parent_supplies,
@@ -360,11 +366,13 @@ static int mfd_remove_devices_fn(struct device *dev, void *data)
if (cell->swnode)
device_remove_software_node(&pdev->dev);

- list_for_each_entry_safe(of_entry, tmp, &mfd_of_node_list, list)
- if (of_entry->dev == &pdev->dev) {
- list_del(&of_entry->list);
- kfree(of_entry);
- }
+ scoped_guard(mutex, &mfd_of_node_mutex) {
+ list_for_each_entry_safe(of_entry, tmp, &mfd_of_node_list, list)
+ if (of_entry->dev == &pdev->dev) {
+ list_del(&of_entry->list);
+ kfree(of_entry);
+ }
+ }

regulator_bulk_unregister_supply_alias(dev, cell->parent_supplies,
cell->num_parent_supplies);
diff --git a/drivers/mfd/omap-usb-host.c b/drivers/mfd/omap-usb-host.c
index 6de7ba7523455..e25957c808124 100644
--- a/drivers/mfd/omap-usb-host.c
+++ b/drivers/mfd/omap-usb-host.c
@@ -819,8 +819,10 @@ static void usbhs_omap_remove(struct platform_device *pdev)
{
pm_runtime_disable(&pdev->dev);

- /* remove children */
- device_for_each_child(&pdev->dev, NULL, usbhs_omap_remove_child);
+ if (pdev->dev.of_node)
+ of_platform_depopulate(&pdev->dev);
+ else
+ device_for_each_child(&pdev->dev, NULL, usbhs_omap_remove_child);
}

static const struct dev_pm_ops usbhsomap_dev_pm_ops = {
diff --git a/drivers/mfd/qcom-pm8xxx.c b/drivers/mfd/qcom-pm8xxx.c
index 8b6285f687da5..0e490591177a2 100644
--- a/drivers/mfd/qcom-pm8xxx.c
+++ b/drivers/mfd/qcom-pm8xxx.c
@@ -579,17 +579,11 @@ static int pm8xxx_probe(struct platform_device *pdev)
return rc;
}

-static int pm8xxx_remove_child(struct device *dev, void *unused)
-{
- platform_device_unregister(to_platform_device(dev));
- return 0;
-}
-
static void pm8xxx_remove(struct platform_device *pdev)
{
struct pm_irq_chip *chip = platform_get_drvdata(pdev);

- device_for_each_child(&pdev->dev, NULL, pm8xxx_remove_child);
+ of_platform_depopulate(&pdev->dev);
irq_domain_remove(chip->irqdomain);
}

diff --git a/drivers/mfd/simple-mfd-i2c.c b/drivers/mfd/simple-mfd-i2c.c
index 6eda79533208a..908eae338fee0 100644
--- a/drivers/mfd/simple-mfd-i2c.c
+++ b/drivers/mfd/simple-mfd-i2c.c
@@ -83,11 +83,42 @@ static const struct simple_mfd_data maxim_max5970 = {
.mfd_cell_size = ARRAY_SIZE(max5970_cells),
};

+static const struct mfd_cell max77705_sensor_cells[] = {
+ { .name = "max77705-battery" },
+ { .name = "max77705-hwmon", },
+};
+
+static const struct simple_mfd_data maxim_mon_max77705 = {
+ .mfd_cell = max77705_sensor_cells,
+ .mfd_cell_size = ARRAY_SIZE(max77705_sensor_cells),
+};
+
+static const struct regmap_config spacemit_p1_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+};
+
+static const struct mfd_cell spacemit_p1_cells[] = {
+ { .name = "spacemit-p1-regulator", },
+ { .name = "spacemit-p1-rtc", },
+};
+
+static const struct simple_mfd_data spacemit_p1 = {
+ .regmap_config = &spacemit_p1_regmap_config,
+ .mfd_cell = spacemit_p1_cells,
+ .mfd_cell_size = ARRAY_SIZE(spacemit_p1_cells),
+};
+
static const struct of_device_id simple_mfd_i2c_of_match[] = {
+ { .compatible = "delta,tn48m-cpld" },
+ { .compatible = "fsl,ls1028aqds-fpga" },
+ { .compatible = "fsl,lx2160aqds-fpga" },
{ .compatible = "kontron,sl28cpld" },
- { .compatible = "silergy,sy7636a", .data = &silergy_sy7636a},
{ .compatible = "maxim,max5970", .data = &maxim_max5970},
{ .compatible = "maxim,max5978", .data = &maxim_max5970},
+ { .compatible = "maxim,max77705-battery", .data = &maxim_mon_max77705},
+ { .compatible = "silergy,sy7636a", .data = &silergy_sy7636a},
+ { .compatible = "spacemit,p1", .data = &spacemit_p1, },
{}
};
MODULE_DEVICE_TABLE(of, simple_mfd_i2c_of_match);
diff --git a/drivers/misc/bcm-vk/bcm_vk_msg.c b/drivers/misc/bcm-vk/bcm_vk_msg.c
index 1f42d1d5a630a..665a3888708ac 100644
--- a/drivers/misc/bcm-vk/bcm_vk_msg.c
+++ b/drivers/misc/bcm-vk/bcm_vk_msg.c
@@ -1010,6 +1010,9 @@ ssize_t bcm_vk_read(struct file *p_file,
struct device *dev = &vk->pdev->dev;
struct bcm_vk_msg_chan *chan = &vk->to_h_msg_chan;
struct bcm_vk_wkent *entry = NULL, *iter;
+ struct vk_msg_blk tmp_msg;
+ u32 tmp_usr_msg_id;
+ u32 tmp_blks;
u32 q_num;
u32 rsp_length;

@@ -1034,6 +1037,9 @@ ssize_t bcm_vk_read(struct file *p_file,
entry = iter;
} else {
/* buffer not big enough */
+ tmp_msg = iter->to_h_msg[0];
+ tmp_usr_msg_id = iter->usr_msg_id;
+ tmp_blks = iter->to_h_blks;
rc = -EMSGSIZE;
}
goto read_loop_exit;
@@ -1052,14 +1058,12 @@ ssize_t bcm_vk_read(struct file *p_file,

bcm_vk_free_wkent(dev, entry);
} else if (rc == -EMSGSIZE) {
- struct vk_msg_blk tmp_msg = entry->to_h_msg[0];
-
/*
* in this case, return just the first block, so
* that app knows what size it is looking for.
*/
- set_msg_id(&tmp_msg, entry->usr_msg_id);
- tmp_msg.size = entry->to_h_blks - 1;
+ set_msg_id(&tmp_msg, tmp_usr_msg_id);
+ tmp_msg.size = tmp_blks - 1;
if (copy_to_user(buf, &tmp_msg, VK_MSGQ_BLK_SIZE) != 0) {
dev_err(dev, "Error return 1st block in -EMSGSIZE\n");
rc = -EFAULT;
diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
index e2221be884458..cc7599f08b391 100644
--- a/drivers/misc/eeprom/eeprom_93xx46.c
+++ b/drivers/misc/eeprom/eeprom_93xx46.c
@@ -45,6 +45,7 @@ struct eeprom_93xx46_platform_data {
#define OP_START 0x4
#define OP_WRITE (OP_START | 0x1)
#define OP_READ (OP_START | 0x2)
+/* The following addresses are offsets for the 1K EEPROM variant in 16-bit mode */
#define ADDR_EWDS 0x00
#define ADDR_ERAL 0x20
#define ADDR_EWEN 0x30
@@ -191,10 +192,7 @@ static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on)
bits = edev->addrlen + 3;

cmd_addr = OP_START << edev->addrlen;
- if (edev->pdata->flags & EE_ADDR8)
- cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS) << 1;
- else
- cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS);
+ cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS) << (edev->addrlen - 6);

if (has_quirk_instruction_length(edev)) {
cmd_addr <<= 2;
@@ -328,10 +326,7 @@ static int eeprom_93xx46_eral(struct eeprom_93xx46_dev *edev)
bits = edev->addrlen + 3;

cmd_addr = OP_START << edev->addrlen;
- if (edev->pdata->flags & EE_ADDR8)
- cmd_addr |= ADDR_ERAL << 1;
- else
- cmd_addr |= ADDR_ERAL;
+ cmd_addr |= ADDR_ERAL << (edev->addrlen - 6);

if (has_quirk_instruction_length(edev)) {
cmd_addr <<= 2;
diff --git a/drivers/most/core.c b/drivers/most/core.c
index da319d108ea1d..40d63e38fef54 100644
--- a/drivers/most/core.c
+++ b/drivers/most/core.c
@@ -1282,19 +1282,28 @@ int most_register_interface(struct most_interface *iface)
int id;
struct most_channel *c;

- if (!iface || !iface->enqueue || !iface->configure ||
- !iface->poison_channel || (iface->num_channels > MAX_CHANNELS))
+ if (!iface)
return -EINVAL;

+ device_initialize(iface->dev);
+
+ if (!iface->enqueue || !iface->configure || !iface->poison_channel ||
+ (iface->num_channels > MAX_CHANNELS)) {
+ put_device(iface->dev);
+ return -EINVAL;
+ }
+
id = ida_alloc(&mdev_id, GFP_KERNEL);
if (id < 0) {
dev_err(iface->dev, "Failed to allocate device ID\n");
+ put_device(iface->dev);
return id;
}

iface->p = kzalloc(sizeof(*iface->p), GFP_KERNEL);
if (!iface->p) {
ida_free(&mdev_id, id);
+ put_device(iface->dev);
return -ENOMEM;
}

@@ -1304,7 +1313,7 @@ int most_register_interface(struct most_interface *iface)
iface->dev->bus = &mostbus;
iface->dev->groups = interface_attr_groups;
dev_set_drvdata(iface->dev, iface);
- if (device_register(iface->dev)) {
+ if (device_add(iface->dev)) {
dev_err(iface->dev, "Failed to register interface device\n");
kfree(iface->p);
put_device(iface->dev);
diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
index 443202b942e1f..5872f1dfe7016 100644
--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
+++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
@@ -1015,7 +1015,7 @@ static int cadence_nand_cdma_send(struct cdns_nand_ctrl *cdns_ctrl,
}

/* Send SDMA command and wait for finish. */
-static u32
+static int
cadence_nand_cdma_send_and_wait(struct cdns_nand_ctrl *cdns_ctrl,
u8 thread)
{
diff --git a/drivers/mtd/nand/raw/pl35x-nand-controller.c b/drivers/mtd/nand/raw/pl35x-nand-controller.c
index 2570fd0beea0d..13c187af5a86f 100644
--- a/drivers/mtd/nand/raw/pl35x-nand-controller.c
+++ b/drivers/mtd/nand/raw/pl35x-nand-controller.c
@@ -976,6 +976,7 @@ static int pl35x_nand_attach_chip(struct nand_chip *chip)
fallthrough;
case NAND_ECC_ENGINE_TYPE_NONE:
case NAND_ECC_ENGINE_TYPE_SOFT:
+ chip->ecc.write_page_raw = nand_monolithic_write_page_raw;
break;
case NAND_ECC_ENGINE_TYPE_ON_HOST:
ret = pl35x_nand_init_hw_ecc_controller(nfc, chip);
diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
index 57b8807b19482..48ac009cbaad2 100644
--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -793,6 +793,14 @@ static void spinand_cont_read_init(struct spinand_device *spinand)
(engine_type == NAND_ECC_ENGINE_TYPE_ON_DIE ||
engine_type == NAND_ECC_ENGINE_TYPE_NONE)) {
spinand->cont_read_possible = true;
+
+ /*
+ * Ensure continuous read is disabled on probe.
+ * Some devices retain this state across soft reset,
+ * which leaves the OOB area inaccessible and results
+ * in false positive returns from spinand_isbad().
+ */
+ spinand_cont_read_enable(spinand, false);
}
}

diff --git a/drivers/mtd/parsers/ofpart_core.c b/drivers/mtd/parsers/ofpart_core.c
index abfa687989182..09961c6f39496 100644
--- a/drivers/mtd/parsers/ofpart_core.c
+++ b/drivers/mtd/parsers/ofpart_core.c
@@ -77,6 +77,7 @@ static int parse_fixed_partitions(struct mtd_info *master,
of_id = of_match_node(parse_ofpart_match_table, ofpart_node);
if (dedicated && !of_id) {
/* The 'partitions' subnode might be used by another parser */
+ of_node_put(ofpart_node);
return 0;
}

@@ -91,12 +92,18 @@ static int parse_fixed_partitions(struct mtd_info *master,
nr_parts++;
}

- if (nr_parts == 0)
+ if (nr_parts == 0) {
+ if (dedicated)
+ of_node_put(ofpart_node);
return 0;
+ }

parts = kcalloc(nr_parts, sizeof(*parts), GFP_KERNEL);
- if (!parts)
+ if (!parts) {
+ if (dedicated)
+ of_node_put(ofpart_node);
return -ENOMEM;
+ }

i = 0;
for_each_child_of_node(ofpart_node, pp) {
@@ -175,6 +182,9 @@ static int parse_fixed_partitions(struct mtd_info *master,
if (quirks && quirks->post_parse)
quirks->post_parse(master, parts, nr_parts);

+ if (dedicated)
+ of_node_put(ofpart_node);
+
*pparts = parts;
return nr_parts;

@@ -183,6 +193,8 @@ static int parse_fixed_partitions(struct mtd_info *master,
master->name, pp, mtd_node);
ret = -EINVAL;
ofpart_none:
+ if (dedicated)
+ of_node_put(ofpart_node);
of_node_put(pp);
kfree(parts);
return ret;
diff --git a/drivers/mtd/parsers/tplink_safeloader.c b/drivers/mtd/parsers/tplink_safeloader.c
index e358a029dc70c..4fcaf92d22e4f 100644
--- a/drivers/mtd/parsers/tplink_safeloader.c
+++ b/drivers/mtd/parsers/tplink_safeloader.c
@@ -116,6 +116,7 @@ static int mtd_parser_tplink_safeloader_parse(struct mtd_info *mtd,
return idx;

err_free:
+ kfree(buf);
for (idx -= 1; idx >= 0; idx--)
kfree(parts[idx].name);
err_free_parts:
diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 209cab75ac0a5..dd1f8cad953bf 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -803,26 +803,29 @@ static int bond_update_speed_duplex(struct slave *slave)
struct ethtool_link_ksettings ecmd;
int res;

- slave->speed = SPEED_UNKNOWN;
- slave->duplex = DUPLEX_UNKNOWN;
-
res = __ethtool_get_link_ksettings(slave_dev, &ecmd);
if (res < 0)
- return 1;
+ goto speed_duplex_unknown;
if (ecmd.base.speed == 0 || ecmd.base.speed == ((__u32)-1))
- return 1;
+ goto speed_duplex_unknown;
switch (ecmd.base.duplex) {
case DUPLEX_FULL:
case DUPLEX_HALF:
break;
default:
- return 1;
+ goto speed_duplex_unknown;
}

slave->speed = ecmd.base.speed;
slave->duplex = ecmd.base.duplex;

return 0;
+
+speed_duplex_unknown:
+ slave->speed = SPEED_UNKNOWN;
+ slave->duplex = DUPLEX_UNKNOWN;
+
+ return 1;
}

const char *bond_slave_link_status(s8 link)
@@ -4475,9 +4478,13 @@ static int bond_close(struct net_device *bond_dev)

bond_work_cancel_all(bond);
bond->send_peer_notif = 0;
+ WRITE_ONCE(bond->recv_probe, NULL);
+
+ /* Wait for any in-flight RX handlers */
+ synchronize_net();
+
if (bond_is_lb(bond))
bond_alb_deinitialize(bond);
- bond->recv_probe = NULL;

if (bond_uses_primary(bond)) {
rcu_read_lock();
diff --git a/drivers/net/caif/caif_serial.c b/drivers/net/caif/caif_serial.c
index ed3a589def6b1..699ed0ff461e8 100644
--- a/drivers/net/caif/caif_serial.c
+++ b/drivers/net/caif/caif_serial.c
@@ -298,6 +298,7 @@ static void ser_release(struct work_struct *work)
{
struct list_head list;
struct ser_device *ser, *tmp;
+ struct tty_struct *tty;

spin_lock(&ser_lock);
list_replace_init(&ser_release_list, &list);
@@ -306,9 +307,11 @@ static void ser_release(struct work_struct *work)
if (!list_empty(&list)) {
rtnl_lock();
list_for_each_entry_safe(ser, tmp, &list, node) {
+ tty = ser->tty;
dev_close(ser->dev);
unregister_netdevice(ser->dev);
debugfs_deinit(ser);
+ tty_kref_put(tty);
}
rtnl_unlock();
}
@@ -369,8 +372,6 @@ static void ldisc_close(struct tty_struct *tty)
{
struct ser_device *ser = tty->disc_data;

- tty_kref_put(ser->tty);
-
spin_lock(&ser_lock);
list_move(&ser->node, &ser_release_list);
spin_unlock(&ser_lock);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 8799db6f3c866..7b84ed5c300e6 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -5991,6 +5991,9 @@ int bnxt_hwrm_cfa_ntuple_filter_free(struct bnxt *bp,
int rc;

set_bit(BNXT_FLTR_FW_DELETED, &fltr->base.state);
+ if (!test_bit(BNXT_STATE_OPEN, &bp->state))
+ return 0;
+
rc = hwrm_req_init(bp, req, HWRM_CFA_NTUPLE_FILTER_FREE);
if (rc)
return rc;
@@ -10407,12 +10410,10 @@ void bnxt_del_one_rss_ctx(struct bnxt *bp, struct bnxt_rss_ctx *rss_ctx,
struct bnxt_ntuple_filter *ntp_fltr;
int i;

- if (netif_running(bp->dev)) {
- bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
- for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
- if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
- bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
- }
+ bnxt_hwrm_vnic_free_one(bp, &rss_ctx->vnic);
+ for (i = 0; i < BNXT_MAX_CTX_PER_VNIC; i++) {
+ if (vnic->fw_rss_cos_lb_ctx[i] != INVALID_HW_RING_ID)
+ bnxt_hwrm_vnic_ctx_free_one(bp, vnic, i);
}
if (!all)
return;
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 54ae90526d8ff..0a8f3dc3c2f01 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -1321,16 +1321,17 @@ static int bnxt_add_ntuple_cls_rule(struct bnxt *bp,
struct bnxt_l2_filter *l2_fltr;
struct bnxt_flow_masks *fmasks;
struct flow_keys *fkeys;
- u32 idx, ring;
+ u32 idx;
int rc;
- u8 vf;

if (!bp->vnic_info)
return -EAGAIN;

- vf = ethtool_get_flow_spec_ring_vf(fs->ring_cookie);
- ring = ethtool_get_flow_spec_ring(fs->ring_cookie);
- if ((fs->flow_type & (FLOW_MAC_EXT | FLOW_EXT)) || vf)
+ if (fs->flow_type & (FLOW_MAC_EXT | FLOW_EXT))
+ return -EOPNOTSUPP;
+
+ if (fs->ring_cookie != RX_CLS_FLOW_DISC &&
+ ethtool_get_flow_spec_ring_vf(fs->ring_cookie))
return -EOPNOTSUPP;

if (flow_type == IP_USER_FLOW) {
@@ -1454,7 +1455,7 @@ static int bnxt_add_ntuple_cls_rule(struct bnxt *bp,
if (fs->ring_cookie == RX_CLS_FLOW_DISC)
new_fltr->base.flags |= BNXT_ACT_DROP;
else
- new_fltr->base.rxq = ring;
+ new_fltr->base.rxq = ethtool_get_flow_spec_ring(fs->ring_cookie);
__set_bit(BNXT_FLTR_VALID, &new_fltr->base.state);
rc = bnxt_insert_ntp_filter(bp, new_fltr, idx);
if (!rc) {
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index 095c1cfcc9a1d..f258c4b82c74d 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -756,14 +756,12 @@ static void macb_mac_link_up(struct phylink_config *config,
if (rx_pause)
ctrl |= MACB_BIT(PAE);

- /* Initialize rings & buffers as clearing MACB_BIT(TE) in link down
- * cleared the pipeline and control registers.
- */
- macb_init_buffers(bp);
-
- for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
+ for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
+ queue->tx_head = 0;
+ queue->tx_tail = 0;
queue_writel(queue, IER,
bp->rx_intr_mask | MACB_TX_INT_FLAGS | MACB_BIT(HRESP));
+ }
}

macb_or_gem_writel(bp, NCFGR, ctrl);
@@ -2985,6 +2983,7 @@ static int macb_open(struct net_device *dev)
}

bp->macbgem_ops.mog_init_rings(bp);
+ macb_init_buffers(bp);

for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {
napi_enable(&queue->napi_rx);
diff --git a/drivers/net/ethernet/ec_bhf.c b/drivers/net/ethernet/ec_bhf.c
index 44af1d13d9317..4f1d7d216b5e3 100644
--- a/drivers/net/ethernet/ec_bhf.c
+++ b/drivers/net/ethernet/ec_bhf.c
@@ -424,7 +424,7 @@ static int ec_bhf_open(struct net_device *net_dev)

error_rx_free:
dma_free_coherent(dev, priv->rx_buf.alloc_len, priv->rx_buf.alloc,
- priv->rx_buf.alloc_len);
+ priv->rx_buf.alloc_phys);
out:
return err;
}
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
index 6ea58fc22783f..e78f400784770 100644
--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-switch.c
@@ -3033,6 +3033,13 @@ static int dpaa2_switch_init(struct fsl_mc_device *sw_dev)
goto err_close;
}

+ if (ethsw->sw_attr.num_ifs >= DPSW_MAX_IF) {
+ dev_err(dev, "DPSW num_ifs %u exceeds max %u\n",
+ ethsw->sw_attr.num_ifs, DPSW_MAX_IF);
+ err = -EINVAL;
+ goto err_close;
+ }
+
err = dpsw_get_api_version(ethsw->mc_io, 0,
&ethsw->major,
&ethsw->minor);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index b477bd286ed72..803da392c8efe 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -1048,13 +1048,13 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
int order;

if (!alloc_size)
- return;
+ goto not_init;

order = get_order(alloc_size);
if (order > MAX_PAGE_ORDER) {
if (net_ratelimit())
dev_warn(ring_to_dev(ring), "failed to allocate tx spare buffer, exceed to max order\n");
- return;
+ goto not_init;
}

tx_spare = devm_kzalloc(ring_to_dev(ring), sizeof(*tx_spare),
@@ -1092,6 +1092,13 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring)
devm_kfree(ring_to_dev(ring), tx_spare);
devm_kzalloc_error:
ring->tqp->handle->kinfo.tx_spare_buf_size = 0;
+not_init:
+ /* During driver init or reset init, ring->tx_spare is always NULL;
+ * when called from hns3_set_ringparam(), it is usually non-NULL and
+ * will be restored if hns3_init_all_ring() fails, so it is safe to
+ * set ring->tx_spare to NULL here.
+ */
+ ring->tx_spare = NULL;
}

/* Use hns3_tx_spare_space() to make sure there is enough buffer
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
index 416e02e7b995f..bc333d8710ac1 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h
@@ -727,8 +727,8 @@ struct hclge_fd_tcam_config_3_cmd {

#define HCLGE_FD_AD_DROP_B 0
#define HCLGE_FD_AD_DIRECT_QID_B 1
-#define HCLGE_FD_AD_QID_S 2
-#define HCLGE_FD_AD_QID_M GENMASK(11, 2)
+#define HCLGE_FD_AD_QID_L_S 2
+#define HCLGE_FD_AD_QID_L_M GENMASK(11, 2)
#define HCLGE_FD_AD_USE_COUNTER_B 12
#define HCLGE_FD_AD_COUNTER_NUM_S 13
#define HCLGE_FD_AD_COUNTER_NUM_M GENMASK(19, 13)
@@ -741,6 +741,7 @@ struct hclge_fd_tcam_config_3_cmd {
#define HCLGE_FD_AD_TC_OVRD_B 16
#define HCLGE_FD_AD_TC_SIZE_S 17
#define HCLGE_FD_AD_TC_SIZE_M GENMASK(20, 17)
+#define HCLGE_FD_AD_QID_H_B 21

struct hclge_fd_ad_config_cmd {
u8 stage;
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index 7468e03051ea4..434f40ce3062b 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -5689,11 +5689,13 @@ static int hclge_fd_ad_config(struct hclge_dev *hdev, u8 stage, int loc,
hnae3_set_field(ad_data, HCLGE_FD_AD_TC_SIZE_M,
HCLGE_FD_AD_TC_SIZE_S, (u32)action->tc_size);
}
+ hnae3_set_bit(ad_data, HCLGE_FD_AD_QID_H_B,
+ action->queue_id >= HCLGE_TQP_MAX_SIZE_DEV_V2 ? 1 : 0);
ad_data <<= 32;
hnae3_set_bit(ad_data, HCLGE_FD_AD_DROP_B, action->drop_packet);
hnae3_set_bit(ad_data, HCLGE_FD_AD_DIRECT_QID_B,
action->forward_to_direct_queue);
- hnae3_set_field(ad_data, HCLGE_FD_AD_QID_M, HCLGE_FD_AD_QID_S,
+ hnae3_set_field(ad_data, HCLGE_FD_AD_QID_L_M, HCLGE_FD_AD_QID_L_S,
action->queue_id);
hnae3_set_bit(ad_data, HCLGE_FD_AD_USE_COUNTER_B, action->use_counter);
hnae3_set_field(ad_data, HCLGE_FD_AD_COUNTER_NUM_M,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 2dc737c7e3fd8..31c83fc69cf41 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -74,7 +74,13 @@ static const struct pci_device_id i40e_pci_tbl[] = {
{PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_BASE_T4), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_BASE_T_BC), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_SFP), 0},
- {PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_B), 0},
+ /*
+ * This ID conflicts with ipw2200, but the devices can be differentiated
+ * because i40e devices use PCI_CLASS_NETWORK_ETHERNET and ipw2200
+ * devices use PCI_CLASS_NETWORK_OTHER.
+ */
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, I40E_DEV_ID_10G_B),
+ PCI_CLASS_NETWORK_ETHERNET << 8, 0xffff00, 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_KX_X722), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_QSFP_X722), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_SFP_X722), 0},
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c b/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
index b5805969404fa..01e82d0b6b2cd 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
@@ -307,7 +307,7 @@ static void octep_setup_iq_regs_cn93_pf(struct octep_device *oct, int iq_no)
}

/* Setup registers for a hardware Rx Queue */
-static void octep_setup_oq_regs_cn93_pf(struct octep_device *oct, int oq_no)
+static int octep_setup_oq_regs_cn93_pf(struct octep_device *oct, int oq_no)
{
u64 reg_val;
u64 oq_ctl = 0ULL;
@@ -355,6 +355,7 @@ static void octep_setup_oq_regs_cn93_pf(struct octep_device *oct, int oq_no)
reg_val = ((u64)time_threshold << 32) |
CFG_GET_OQ_INTR_PKT(oct->conf);
octep_write_csr64(oct, CN93_SDP_R_OUT_INT_LEVELS(oq_no), reg_val);
+ return 0;
}

/* Setup registers for a PF mailbox */
@@ -696,14 +697,26 @@ static void octep_enable_interrupts_cn93_pf(struct octep_device *oct)
/* Disable all interrupts */
static void octep_disable_interrupts_cn93_pf(struct octep_device *oct)
{
- u64 intr_mask = 0ULL;
+ u64 reg_val, intr_mask = 0ULL;
int srn, num_rings, i;

srn = CFG_GET_PORTS_PF_SRN(oct->conf);
num_rings = CFG_GET_PORTS_ACTIVE_IO_RINGS(oct->conf);

- for (i = 0; i < num_rings; i++)
- intr_mask |= (0x1ULL << (srn + i));
+ for (i = 0; i < num_rings; i++) {
+ intr_mask |= BIT_ULL(srn + i);
+ reg_val = octep_read_csr64(oct,
+ CN93_SDP_R_IN_INT_LEVELS(srn + i));
+ reg_val &= ~CN93_INT_ENA_BIT;
+ octep_write_csr64(oct,
+ CN93_SDP_R_IN_INT_LEVELS(srn + i), reg_val);
+
+ reg_val = octep_read_csr64(oct,
+ CN93_SDP_R_OUT_INT_LEVELS(srn + i));
+ reg_val &= ~CN93_INT_ENA_BIT;
+ octep_write_csr64(oct,
+ CN93_SDP_R_OUT_INT_LEVELS(srn + i), reg_val);
+ }

octep_write_csr64(oct, CN93_SDP_EPF_IRERR_RINT_ENA_W1C, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_ORERR_RINT_ENA_W1C, intr_mask);
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_cnxk_pf.c b/drivers/net/ethernet/marvell/octeon_ep/octep_cnxk_pf.c
index 5de0b5ecbc5fd..09a3f1d0645b8 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_cnxk_pf.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_cnxk_pf.c
@@ -8,6 +8,7 @@
#include <linux/pci.h>
#include <linux/netdevice.h>
#include <linux/etherdevice.h>
+#include <linux/jiffies.h>

#include "octep_config.h"
#include "octep_main.h"
@@ -327,12 +328,14 @@ static void octep_setup_iq_regs_cnxk_pf(struct octep_device *oct, int iq_no)
}

/* Setup registers for a hardware Rx Queue */
-static void octep_setup_oq_regs_cnxk_pf(struct octep_device *oct, int oq_no)
+static int octep_setup_oq_regs_cnxk_pf(struct octep_device *oct, int oq_no)
{
- u64 reg_val;
- u64 oq_ctl = 0ULL;
- u32 time_threshold = 0;
struct octep_oq *oq = oct->oq[oq_no];
+ unsigned long t_out_jiffies;
+ u32 time_threshold = 0;
+ u64 oq_ctl = 0ULL;
+ u64 reg_ba_val;
+ u64 reg_val;

oq_no += CFG_GET_PORTS_PF_SRN(oct->conf);
reg_val = octep_read_csr64(oct, CNXK_SDP_R_OUT_CONTROL(oq_no));
@@ -343,6 +346,36 @@ static void octep_setup_oq_regs_cnxk_pf(struct octep_device *oct, int oq_no)
reg_val = octep_read_csr64(oct, CNXK_SDP_R_OUT_CONTROL(oq_no));
} while (!(reg_val & CNXK_R_OUT_CTL_IDLE));
}
+ octep_write_csr64(oct, CNXK_SDP_R_OUT_WMARK(oq_no), oq->max_count);
+ /* Wait for WMARK to get applied */
+ usleep_range(10, 15);
+
+ octep_write_csr64(oct, CNXK_SDP_R_OUT_SLIST_BADDR(oq_no),
+ oq->desc_ring_dma);
+ octep_write_csr64(oct, CNXK_SDP_R_OUT_SLIST_RSIZE(oq_no),
+ oq->max_count);
+ reg_ba_val = octep_read_csr64(oct, CNXK_SDP_R_OUT_SLIST_BADDR(oq_no));
+
+ if (reg_ba_val != oq->desc_ring_dma) {
+ t_out_jiffies = jiffies + 10 * HZ;
+ do {
+ if (reg_ba_val == ULLONG_MAX)
+ return -EFAULT;
+ octep_write_csr64(oct,
+ CNXK_SDP_R_OUT_SLIST_BADDR(oq_no),
+ oq->desc_ring_dma);
+ octep_write_csr64(oct,
+ CNXK_SDP_R_OUT_SLIST_RSIZE(oq_no),
+ oq->max_count);
+ reg_ba_val =
+ octep_read_csr64(oct,
+ CNXK_SDP_R_OUT_SLIST_BADDR(oq_no));
+ } while ((reg_ba_val != oq->desc_ring_dma) &&
+ time_before(jiffies, t_out_jiffies));
+
+ if (reg_ba_val != oq->desc_ring_dma)
+ return -EAGAIN;
+ }

reg_val &= ~(CNXK_R_OUT_CTL_IMODE);
reg_val &= ~(CNXK_R_OUT_CTL_ROR_P);
@@ -356,10 +389,6 @@ static void octep_setup_oq_regs_cnxk_pf(struct octep_device *oct, int oq_no)
reg_val |= (CNXK_R_OUT_CTL_ES_P);

octep_write_csr64(oct, CNXK_SDP_R_OUT_CONTROL(oq_no), reg_val);
- octep_write_csr64(oct, CNXK_SDP_R_OUT_SLIST_BADDR(oq_no),
- oq->desc_ring_dma);
- octep_write_csr64(oct, CNXK_SDP_R_OUT_SLIST_RSIZE(oq_no),
- oq->max_count);

oq_ctl = octep_read_csr64(oct, CNXK_SDP_R_OUT_CONTROL(oq_no));

@@ -385,6 +414,7 @@ static void octep_setup_oq_regs_cnxk_pf(struct octep_device *oct, int oq_no)
reg_val &= ~0xFFFFFFFFULL;
reg_val |= CFG_GET_OQ_WMARK(oct->conf);
octep_write_csr64(oct, CNXK_SDP_R_OUT_WMARK(oq_no), reg_val);
+ return 0;
}

/* Setup registers for a PF mailbox */
@@ -720,14 +750,26 @@ static void octep_enable_interrupts_cnxk_pf(struct octep_device *oct)
/* Disable all interrupts */
static void octep_disable_interrupts_cnxk_pf(struct octep_device *oct)
{
- u64 intr_mask = 0ULL;
+ u64 reg_val, intr_mask = 0ULL;
int srn, num_rings, i;

srn = CFG_GET_PORTS_PF_SRN(oct->conf);
num_rings = CFG_GET_PORTS_ACTIVE_IO_RINGS(oct->conf);

- for (i = 0; i < num_rings; i++)
- intr_mask |= (0x1ULL << (srn + i));
+ for (i = 0; i < num_rings; i++) {
+ intr_mask |= BIT_ULL(srn + i);
+ reg_val = octep_read_csr64(oct,
+ CNXK_SDP_R_IN_INT_LEVELS(srn + i));
+ reg_val &= ~CNXK_INT_ENA_BIT;
+ octep_write_csr64(oct,
+ CNXK_SDP_R_IN_INT_LEVELS(srn + i), reg_val);
+
+ reg_val = octep_read_csr64(oct,
+ CNXK_SDP_R_OUT_INT_LEVELS(srn + i));
+ reg_val &= ~CNXK_INT_ENA_BIT;
+ octep_write_csr64(oct,
+ CNXK_SDP_R_OUT_INT_LEVELS(srn + i), reg_val);
+ }

octep_write_csr64(oct, CNXK_SDP_EPF_IRERR_RINT_ENA_W1C, intr_mask);
octep_write_csr64(oct, CNXK_SDP_EPF_ORERR_RINT_ENA_W1C, intr_mask);
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
index 936b786f42816..c063c8451d47a 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
@@ -77,7 +77,7 @@ struct octep_pci_win_regs {

struct octep_hw_ops {
void (*setup_iq_regs)(struct octep_device *oct, int q);
- void (*setup_oq_regs)(struct octep_device *oct, int q);
+ int (*setup_oq_regs)(struct octep_device *oct, int q);
void (*setup_mbox_regs)(struct octep_device *oct, int mbox);

irqreturn_t (*mbox_intr_handler)(void *ioq_vector);
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
index ca473502d7a02..95f1dfff90cce 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
@@ -386,5 +386,6 @@
#define CN93_PEM_BAR4_INDEX 7
#define CN93_PEM_BAR4_INDEX_SIZE 0x400000ULL
#define CN93_PEM_BAR4_INDEX_OFFSET (CN93_PEM_BAR4_INDEX * CN93_PEM_BAR4_INDEX_SIZE)
+#define CN93_INT_ENA_BIT BIT_ULL(62)

#endif /* _OCTEP_REGS_CN9K_PF_H_ */
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cnxk_pf.h b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cnxk_pf.h
index e637d7c8224d4..4d172a552f80c 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cnxk_pf.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cnxk_pf.h
@@ -412,5 +412,6 @@
#define CNXK_PEM_BAR4_INDEX 7
#define CNXK_PEM_BAR4_INDEX_SIZE 0x400000ULL
#define CNXK_PEM_BAR4_INDEX_OFFSET (CNXK_PEM_BAR4_INDEX * CNXK_PEM_BAR4_INDEX_SIZE)
+#define CNXK_INT_ENA_BIT BIT_ULL(62)

#endif /* _OCTEP_REGS_CNXK_PF_H_ */
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
index 82b6b19e76b47..f2a7c6a76c742 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
@@ -12,6 +12,8 @@
#include "octep_config.h"
#include "octep_main.h"

+static void octep_oq_free_ring_buffers(struct octep_oq *oq);
+
static void octep_oq_reset_indices(struct octep_oq *oq)
{
oq->host_read_idx = 0;
@@ -170,11 +172,15 @@ static int octep_setup_oq(struct octep_device *oct, int q_no)
goto oq_fill_buff_err;

octep_oq_reset_indices(oq);
- oct->hw_ops.setup_oq_regs(oct, q_no);
+ if (oct->hw_ops.setup_oq_regs(oct, q_no))
+ goto oq_setup_err;
+
oct->num_oqs++;

return 0;

+oq_setup_err:
+ octep_oq_free_ring_buffers(oq);
oq_fill_buff_err:
vfree(oq->buff_info);
oq->buff_info = NULL;
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cn9k.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cn9k.c
index 88937fce75f14..4c769b27c2789 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cn9k.c
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cn9k.c
@@ -196,7 +196,7 @@ static void octep_vf_setup_iq_regs_cn93(struct octep_vf_device *oct, int iq_no)
}

/* Setup registers for a hardware Rx Queue */
-static void octep_vf_setup_oq_regs_cn93(struct octep_vf_device *oct, int oq_no)
+static int octep_vf_setup_oq_regs_cn93(struct octep_vf_device *oct, int oq_no)
{
struct octep_vf_oq *oq = oct->oq[oq_no];
u32 time_threshold = 0;
@@ -239,6 +239,7 @@ static void octep_vf_setup_oq_regs_cn93(struct octep_vf_device *oct, int oq_no)
time_threshold = CFG_GET_OQ_INTR_TIME(oct->conf);
reg_val = ((u64)time_threshold << 32) | CFG_GET_OQ_INTR_PKT(oct->conf);
octep_vf_write_csr64(oct, CN93_VF_SDP_R_OUT_INT_LEVELS(oq_no), reg_val);
+ return 0;
}

/* Setup registers for a VF mailbox */
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cnxk.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cnxk.c
index 1f79dfad42c62..a968b93a67943 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cnxk.c
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_cnxk.c
@@ -199,11 +199,13 @@ static void octep_vf_setup_iq_regs_cnxk(struct octep_vf_device *oct, int iq_no)
}

/* Setup registers for a hardware Rx Queue */
-static void octep_vf_setup_oq_regs_cnxk(struct octep_vf_device *oct, int oq_no)
+static int octep_vf_setup_oq_regs_cnxk(struct octep_vf_device *oct, int oq_no)
{
struct octep_vf_oq *oq = oct->oq[oq_no];
+ unsigned long t_out_jiffies;
u32 time_threshold = 0;
u64 oq_ctl = ULL(0);
+ u64 reg_ba_val;
u64 reg_val;

reg_val = octep_vf_read_csr64(oct, CNXK_VF_SDP_R_OUT_CONTROL(oq_no));
@@ -214,6 +216,38 @@ static void octep_vf_setup_oq_regs_cnxk(struct octep_vf_device *oct, int oq_no)
reg_val = octep_vf_read_csr64(oct, CNXK_VF_SDP_R_OUT_CONTROL(oq_no));
} while (!(reg_val & CNXK_VF_R_OUT_CTL_IDLE));
}
+ octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_WMARK(oq_no),
+ oq->max_count);
+ /* Wait for WMARK to get applied */
+ usleep_range(10, 15);
+
+ octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_SLIST_BADDR(oq_no),
+ oq->desc_ring_dma);
+ octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_SLIST_RSIZE(oq_no),
+ oq->max_count);
+ reg_ba_val = octep_vf_read_csr64(oct,
+ CNXK_VF_SDP_R_OUT_SLIST_BADDR(oq_no));
+ if (reg_ba_val != oq->desc_ring_dma) {
+ t_out_jiffies = jiffies + 10 * HZ;
+ do {
+ if (reg_ba_val == ULLONG_MAX)
+ return -EFAULT;
+ octep_vf_write_csr64(oct,
+ CNXK_VF_SDP_R_OUT_SLIST_BADDR
+ (oq_no), oq->desc_ring_dma);
+ octep_vf_write_csr64(oct,
+ CNXK_VF_SDP_R_OUT_SLIST_RSIZE
+ (oq_no), oq->max_count);
+ reg_ba_val =
+ octep_vf_read_csr64(oct,
+ CNXK_VF_SDP_R_OUT_SLIST_BADDR
+ (oq_no));
+ } while ((reg_ba_val != oq->desc_ring_dma) &&
+ time_before(jiffies, t_out_jiffies));
+
+ if (reg_ba_val != oq->desc_ring_dma)
+ return -EAGAIN;
+ }

reg_val &= ~(CNXK_VF_R_OUT_CTL_IMODE);
reg_val &= ~(CNXK_VF_R_OUT_CTL_ROR_P);
@@ -227,8 +261,6 @@ static void octep_vf_setup_oq_regs_cnxk(struct octep_vf_device *oct, int oq_no)
reg_val |= (CNXK_VF_R_OUT_CTL_ES_P);

octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_CONTROL(oq_no), reg_val);
- octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_SLIST_BADDR(oq_no), oq->desc_ring_dma);
- octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_SLIST_RSIZE(oq_no), oq->max_count);

oq_ctl = octep_vf_read_csr64(oct, CNXK_VF_SDP_R_OUT_CONTROL(oq_no));
/* Clear the ISIZE and BSIZE (22-0) */
@@ -250,6 +282,7 @@ static void octep_vf_setup_oq_regs_cnxk(struct octep_vf_device *oct, int oq_no)
reg_val &= ~GENMASK_ULL(31, 0);
reg_val |= CFG_GET_OQ_WMARK(oct->conf);
octep_vf_write_csr64(oct, CNXK_VF_SDP_R_OUT_WMARK(oq_no), reg_val);
+ return 0;
}

/* Setup registers for a VF mailbox */
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
index 1a352f41f823c..4ee6b4d568ede 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
@@ -55,7 +55,7 @@ struct octep_vf_mmio {

struct octep_vf_hw_ops {
void (*setup_iq_regs)(struct octep_vf_device *oct, int q);
- void (*setup_oq_regs)(struct octep_vf_device *oct, int q);
+ int (*setup_oq_regs)(struct octep_vf_device *oct, int q);
void (*setup_mbox_regs)(struct octep_vf_device *oct, int mbox);

irqreturn_t (*non_ioq_intr_handler)(void *ioq_vector);
diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
index d70c8be3cfc40..6f865dbbba6c6 100644
--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
@@ -12,6 +12,8 @@
#include "octep_vf_config.h"
#include "octep_vf_main.h"

+static void octep_vf_oq_free_ring_buffers(struct octep_vf_oq *oq);
+
static void octep_vf_oq_reset_indices(struct octep_vf_oq *oq)
{
oq->host_read_idx = 0;
@@ -171,11 +173,15 @@ static int octep_vf_setup_oq(struct octep_vf_device *oct, int q_no)
goto oq_fill_buff_err;

octep_vf_oq_reset_indices(oq);
- oct->hw_ops.setup_oq_regs(oct, q_no);
+ if (oct->hw_ops.setup_oq_regs(oct, q_no))
+ goto oq_setup_err;
+
oct->num_oqs++;

return 0;

+oq_setup_err:
+ octep_vf_oq_free_ring_buffers(oq);
oq_fill_buff_err:
vfree(oq->buff_info);
oq->buff_info = NULL;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
index ca74bf6843369..394061a3d38c7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
@@ -1791,6 +1791,8 @@ static int cgx_lmac_exit(struct cgx *cgx)
cgx->mac_ops->mac_pause_frm_config(cgx, lmac->lmac_id, false);
cgx_configure_interrupt(cgx, lmac, lmac->lmac_id, true);
kfree(lmac->mac_to_index_bmap.bmap);
+ rvu_free_bitmap(&lmac->rx_fc_pfvf_bmap);
+ rvu_free_bitmap(&lmac->tx_fc_pfvf_bmap);
kfree(lmac->name);
kfree(lmac);
}
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 74201e0210bbf..d5e2ebedd433e 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -3529,11 +3529,22 @@ static void rvu_remove(struct pci_dev *pdev)
devm_kfree(&pdev->dev, rvu);
}

+static void rvu_shutdown(struct pci_dev *pdev)
+{
+ struct rvu *rvu = pci_get_drvdata(pdev);
+
+ if (!rvu)
+ return;
+
+ rvu_clear_rvum_blk_revid(rvu);
+}
+
static struct pci_driver rvu_driver = {
.name = DRV_NAME,
.id_table = rvu_id_table,
.probe = rvu_probe,
.remove = rvu_remove,
+ .shutdown = rvu_shutdown,
};

static int __init rvu_init_module(void)
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
index da69350c6f765..0a23e0a918280 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c
@@ -4861,12 +4861,18 @@ static int rvu_nix_block_init(struct rvu *rvu, struct nix_hw *nix_hw)
/* Set chan/link to backpressure TL3 instead of TL2 */
rvu_write64(rvu, blkaddr, NIX_AF_PSE_CHANNEL_LEVEL, 0x01);

- /* Disable SQ manager's sticky mode operation (set TM6 = 0)
+ /* Disable SQ manager's sticky mode operation (set TM6 = 0, TM11 = 0)
* This sticky mode is known to cause SQ stalls when multiple
- * SQs are mapped to same SMQ and transmitting pkts at a time.
+ * SQs are mapped to same SMQ and transmitting pkts simultaneously.
+ * NIX PSE may deadlock when there are any sticky to non-sticky
+ * transmission. Hence disable it (TM5 = 0).
*/
cfg = rvu_read64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS);
- cfg &= ~BIT_ULL(15);
+ cfg &= ~(BIT_ULL(15) | BIT_ULL(14) | BIT_ULL(23));
+ /* NIX may drop credits when condition clocks are turned off.
+ * Hence enable control flow clk (set TM9 = 1).
+ */
+ cfg |= BIT_ULL(21);
rvu_write64(rvu, blkaddr, NIX_AF_SQM_DBG_CTL_STATUS, cfg);

ltdefs = rvu->kpu.lt_def;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
index 97722ce8c4cb3..a78923d7811dc 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc.c
@@ -1058,32 +1058,35 @@ void rvu_npc_update_flowkey_alg_idx(struct rvu *rvu, u16 pcifunc, int nixlf,
rvu_write64(rvu, blkaddr,
NPC_AF_MCAMEX_BANKX_ACTION(index, bank), *(u64 *)&action);

- /* update the VF flow rule action with the VF default entry action */
- if (mcam_index < 0)
- npc_update_vf_flow_entry(rvu, mcam, blkaddr, pcifunc,
- *(u64 *)&action);
-
/* update the action change in default rule */
pfvf = rvu_get_pfvf(rvu, pcifunc);
if (pfvf->def_ucast_rule)
pfvf->def_ucast_rule->rx_action = action;

- index = npc_get_nixlf_mcam_index(mcam, pcifunc,
- nixlf, NIXLF_PROMISC_ENTRY);
+ if (mcam_index < 0) {
+ /* update the VF flow rule action with the VF default
+ * entry action
+ */
+ npc_update_vf_flow_entry(rvu, mcam, blkaddr, pcifunc,
+ *(u64 *)&action);

- /* If PF's promiscuous entry is enabled,
- * Set RSS action for that entry as well
- */
- npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index, blkaddr,
- alg_idx);
+ index = npc_get_nixlf_mcam_index(mcam, pcifunc,
+ nixlf, NIXLF_PROMISC_ENTRY);

- index = npc_get_nixlf_mcam_index(mcam, pcifunc,
- nixlf, NIXLF_ALLMULTI_ENTRY);
- /* If PF's allmulti entry is enabled,
- * Set RSS action for that entry as well
- */
- npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index, blkaddr,
- alg_idx);
+ /* If PF's promiscuous entry is enabled,
+ * Set RSS action for that entry as well
+ */
+ npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index,
+ blkaddr, alg_idx);
+
+ index = npc_get_nixlf_mcam_index(mcam, pcifunc,
+ nixlf, NIXLF_ALLMULTI_ENTRY);
+ /* If PF's allmulti entry is enabled,
+ * Set RSS action for that entry as well
+ */
+ npc_update_rx_action_with_alg_idx(rvu, action, pfvf, index,
+ blkaddr, alg_idx);
+ }
}

void npc_enadis_default_mce_entry(struct rvu *rvu, u16 pcifunc,
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
index 5492dea547a19..2de9c44ef57c7 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
@@ -3101,6 +3101,7 @@ static int otx2_probe(struct pci_dev *pdev, const struct pci_device_id *id)
return 0;

err_pf_sriov_init:
+ otx2_unregister_dl(pf);
otx2_shutdown_tc(pf);
err_mcam_flow_del:
otx2_mcam_flow_del(pf);
diff --git a/drivers/net/ethernet/marvell/skge.c b/drivers/net/ethernet/marvell/skge.c
index fcfb34561882b..7d398c16ea720 100644
--- a/drivers/net/ethernet/marvell/skge.c
+++ b/drivers/net/ethernet/marvell/skge.c
@@ -78,7 +78,6 @@ static const struct pci_device_id skge_id_table[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, 0x4320) }, /* SK-98xx V2.0 */
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4b01) }, /* D-Link DGE-530T (rev.B) */
{ PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4c00) }, /* D-Link DGE-530T */
- { PCI_DEVICE(PCI_VENDOR_ID_DLINK, 0x4302) }, /* D-Link DGE-530T Rev C1 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4320) }, /* Marvell Yukon 88E8001/8003/8010 */
{ PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x5005) }, /* Belkin */
{ PCI_DEVICE(PCI_VENDOR_ID_CNET, 0x434E) }, /* CNet PowerG-2000 */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 8245a149cdf85..0e18906168313 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -176,7 +176,8 @@ static inline u16 mlx5_min_rx_wqes(int wq_type, u32 wq_size)
}

/* Use this function to get max num channels (rxqs/txqs) only to create netdev */
-static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
+static inline unsigned int
+mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
{
return is_kdump_kernel() ?
MLX5E_MIN_NUM_CHANNELS :
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 7e24f3f0b4dd3..486f05112f5a6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -36,6 +36,7 @@
#include <linux/inetdevice.h>
#include <linux/netdevice.h>
#include <net/netevent.h>
+#include <net/ipv6_stubs.h>

#include "en.h"
#include "eswitch.h"
@@ -265,10 +266,15 @@ static void mlx5e_ipsec_init_limits(struct mlx5e_ipsec_sa_entry *sa_entry,
static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_accel_esp_xfrm_attrs *attrs)
{
- struct mlx5_core_dev *mdev = mlx5e_ipsec_sa2dev(sa_entry);
+ struct mlx5e_ipsec_addr *addrs = &attrs->addrs;
struct net_device *netdev = sa_entry->dev;
+ struct xfrm_state *x = sa_entry->x;
+ struct dst_entry *rt_dst_entry;
+ struct flowi4 fl4 = {};
+ struct flowi6 fl6 = {};
struct neighbour *n;
u8 addr[ETH_ALEN];
+ struct rtable *rt;
const void *pkey;
u8 *dst, *src;

@@ -276,23 +282,94 @@ static void mlx5e_ipsec_init_macs(struct mlx5e_ipsec_sa_entry *sa_entry,
attrs->type != XFRM_DEV_OFFLOAD_PACKET)
return;

- mlx5_query_mac_address(mdev, addr);
+ ether_addr_copy(addr, netdev->dev_addr);
switch (attrs->dir) {
case XFRM_DEV_OFFLOAD_IN:
src = attrs->dmac;
dst = attrs->smac;
- pkey = &attrs->saddr.a4;
+
+ switch (addrs->family) {
+ case AF_INET:
+ fl4.flowi4_proto = x->sel.proto;
+ fl4.daddr = addrs->saddr.a4;
+ fl4.saddr = addrs->daddr.a4;
+ pkey = &addrs->saddr.a4;
+ break;
+ case AF_INET6:
+ fl6.flowi6_proto = x->sel.proto;
+ memcpy(fl6.daddr.s6_addr32, addrs->saddr.a6, 16);
+ memcpy(fl6.saddr.s6_addr32, addrs->daddr.a6, 16);
+ pkey = &addrs->saddr.a6;
+ break;
+ default:
+ return;
+ }
break;
case XFRM_DEV_OFFLOAD_OUT:
src = attrs->smac;
dst = attrs->dmac;
- pkey = &attrs->daddr.a4;
+ switch (addrs->family) {
+ case AF_INET:
+ fl4.flowi4_proto = x->sel.proto;
+ fl4.daddr = addrs->daddr.a4;
+ fl4.saddr = addrs->saddr.a4;
+ pkey = &addrs->daddr.a4;
+ break;
+ case AF_INET6:
+ fl6.flowi6_proto = x->sel.proto;
+ memcpy(fl6.daddr.s6_addr32, addrs->daddr.a6, 16);
+ memcpy(fl6.saddr.s6_addr32, addrs->saddr.a6, 16);
+ pkey = &addrs->daddr.a6;
+ break;
+ default:
+ return;
+ }
break;
default:
return;
}

ether_addr_copy(src, addr);
+
+ /* Destination can refer to a routed network, so perform FIB lookup
+ * to resolve nexthop and get its MAC. Neighbour resolution is used as
+ * fallback.
+ */
+ switch (addrs->family) {
+ case AF_INET:
+ rt = ip_route_output_key(dev_net(netdev), &fl4);
+ if (IS_ERR(rt))
+ goto neigh;
+
+ if (rt->rt_type != RTN_UNICAST) {
+ ip_rt_put(rt);
+ goto neigh;
+ }
+ rt_dst_entry = &rt->dst;
+ break;
+ case AF_INET6:
+ rt_dst_entry = ipv6_stub->ipv6_dst_lookup_flow(
+ dev_net(netdev), NULL, &fl6, NULL);
+ if (IS_ERR(rt_dst_entry))
+ goto neigh;
+ break;
+ default:
+ return;
+ }
+
+ n = dst_neigh_lookup(rt_dst_entry, pkey);
+ if (!n) {
+ dst_release(rt_dst_entry);
+ goto neigh;
+ }
+
+ neigh_ha_snapshot(addr, n, netdev);
+ ether_addr_copy(dst, addr);
+ dst_release(rt_dst_entry);
+ neigh_release(n);
+ return;
+
+neigh:
n = neigh_lookup(&arp_tbl, pkey, netdev);
if (!n) {
n = neigh_create(&arp_tbl, pkey, netdev);
@@ -379,9 +456,10 @@ void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
attrs->spi = be32_to_cpu(x->id.spi);

/* source , destination ips */
- memcpy(&attrs->saddr, x->props.saddr.a6, sizeof(attrs->saddr));
- memcpy(&attrs->daddr, x->id.daddr.a6, sizeof(attrs->daddr));
- attrs->family = x->props.family;
+ memcpy(&attrs->addrs.saddr, x->props.saddr.a6,
+ sizeof(attrs->addrs.saddr));
+ memcpy(&attrs->addrs.daddr, x->id.daddr.a6, sizeof(attrs->addrs.daddr));
+ attrs->addrs.family = x->props.family;
attrs->type = x->xso.type;
attrs->reqid = x->props.reqid;
attrs->upspec.dport = ntohs(x->sel.dport);
@@ -433,7 +511,8 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
}
if (x->encap) {
if (!(mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ESPINUDP)) {
- NL_SET_ERR_MSG_MOD(extack, "Encapsulation is not supported");
+ NL_SET_ERR_MSG_MOD(extack,
+ "Encapsulation is not supported");
return -EINVAL;
}

@@ -857,13 +936,13 @@ static int mlx5e_ipsec_netevent_event(struct notifier_block *nb,
xa_for_each_marked(&ipsec->sadb, idx, sa_entry, MLX5E_IPSEC_TUNNEL_SA) {
attrs = &sa_entry->attrs;

- if (attrs->family == AF_INET) {
- if (!neigh_key_eq32(n, &attrs->saddr.a4) &&
- !neigh_key_eq32(n, &attrs->daddr.a4))
+ if (attrs->addrs.family == AF_INET) {
+ if (!neigh_key_eq32(n, &attrs->addrs.saddr.a4) &&
+ !neigh_key_eq32(n, &attrs->addrs.daddr.a4))
continue;
} else {
- if (!neigh_key_eq128(n, &attrs->saddr.a4) &&
- !neigh_key_eq128(n, &attrs->daddr.a4))
+ if (!neigh_key_eq128(n, &attrs->addrs.saddr.a4) &&
+ !neigh_key_eq128(n, &attrs->addrs.daddr.a4))
continue;
}

@@ -1037,7 +1116,7 @@ static void mlx5e_xfrm_update_stats(struct xfrm_state *x)
* by removing always available headers.
*/
headers = sizeof(struct ethhdr);
- if (sa_entry->attrs.family == AF_INET)
+ if (sa_entry->attrs.addrs.family == AF_INET)
headers += sizeof(struct iphdr);
else
headers += sizeof(struct ipv6hdr);
@@ -1118,9 +1197,9 @@ mlx5e_ipsec_build_accel_pol_attrs(struct mlx5e_ipsec_pol_entry *pol_entry,
sel = &x->selector;
memset(attrs, 0, sizeof(*attrs));

- memcpy(&attrs->saddr, sel->saddr.a6, sizeof(attrs->saddr));
- memcpy(&attrs->daddr, sel->daddr.a6, sizeof(attrs->daddr));
- attrs->family = sel->family;
+ memcpy(&attrs->addrs.saddr, sel->saddr.a6, sizeof(attrs->addrs.saddr));
+ memcpy(&attrs->addrs.daddr, sel->daddr.a6, sizeof(attrs->addrs.daddr));
+ attrs->addrs.family = sel->family;
attrs->dir = x->xdo.dir;
attrs->action = x->action;
attrs->type = XFRM_DEV_OFFLOAD_PACKET;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index 78e78b6f81467..a37c8a117d80f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -76,11 +76,7 @@ struct mlx5_replay_esn {
u8 trigger : 1;
};

-struct mlx5_accel_esp_xfrm_attrs {
- u32 spi;
- u32 mode;
- struct aes_gcm_keymat aes_gcm;
-
+struct mlx5e_ipsec_addr {
union {
__be32 a4;
__be32 a6[4];
@@ -90,13 +86,19 @@ struct mlx5_accel_esp_xfrm_attrs {
__be32 a4;
__be32 a6[4];
} daddr;
+ u8 family;
+};

+struct mlx5_accel_esp_xfrm_attrs {
+ u32 spi;
+ u32 mode;
+ struct aes_gcm_keymat aes_gcm;
+ struct mlx5e_ipsec_addr addrs;
struct upspec upspec;
u8 dir : 2;
u8 type : 2;
u8 drop : 1;
u8 encap : 1;
- u8 family;
struct mlx5_replay_esn replay_esn;
u32 authsize;
u32 reqid;
@@ -275,18 +277,8 @@ struct mlx5e_ipsec_sa_entry {
};

struct mlx5_accel_pol_xfrm_attrs {
- union {
- __be32 a4;
- __be32 a6[4];
- } saddr;
-
- union {
- __be32 a4;
- __be32 a6[4];
- } daddr;
-
+ struct mlx5e_ipsec_addr addrs;
struct upspec upspec;
- u8 family;
u8 action;
u8 type : 2;
u8 dir : 2;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 131eb9b4eba65..831d4b17ad07a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -1164,9 +1164,12 @@ static void tx_ft_put_policy(struct mlx5e_ipsec *ipsec, u32 prio, int type)
mutex_unlock(&tx->ft.mutex);
}

-static void setup_fte_addr4(struct mlx5_flow_spec *spec, __be32 *saddr,
- __be32 *daddr)
+static void setup_fte_addr4(struct mlx5_flow_spec *spec,
+ struct mlx5e_ipsec_addr *addrs)
{
+ __be32 *saddr = &addrs->saddr.a4;
+ __be32 *daddr = &addrs->daddr.a4;
+
if (!*saddr && !*daddr)
return;

@@ -1190,9 +1193,12 @@ static void setup_fte_addr4(struct mlx5_flow_spec *spec, __be32 *saddr,
}
}

-static void setup_fte_addr6(struct mlx5_flow_spec *spec, __be32 *saddr,
- __be32 *daddr)
+static void setup_fte_addr6(struct mlx5_flow_spec *spec,
+ struct mlx5e_ipsec_addr *addrs)
{
+ __be32 *saddr = addrs->saddr.a6;
+ __be32 *daddr = addrs->daddr.a6;
+
if (addr6_all_zero(saddr) && addr6_all_zero(daddr))
return;

@@ -1401,7 +1407,7 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) {
bfflen += sizeof(*esp_hdr) + 8;

- switch (attrs->family) {
+ switch (attrs->addrs.family) {
case AF_INET:
bfflen += sizeof(*iphdr);
break;
@@ -1418,7 +1424,7 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
return -ENOMEM;

eth_hdr = (struct ethhdr *)reformatbf;
- switch (attrs->family) {
+ switch (attrs->addrs.family) {
case AF_INET:
eth_hdr->h_proto = htons(ETH_P_IP);
break;
@@ -1441,11 +1447,11 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
reformat_params->param_0 = attrs->authsize;

hdr = reformatbf + sizeof(*eth_hdr);
- switch (attrs->family) {
+ switch (attrs->addrs.family) {
case AF_INET:
iphdr = (struct iphdr *)hdr;
- memcpy(&iphdr->saddr, &attrs->saddr.a4, 4);
- memcpy(&iphdr->daddr, &attrs->daddr.a4, 4);
+ memcpy(&iphdr->saddr, &attrs->addrs.saddr.a4, 4);
+ memcpy(&iphdr->daddr, &attrs->addrs.daddr.a4, 4);
iphdr->version = 4;
iphdr->ihl = 5;
iphdr->ttl = IPSEC_TUNNEL_DEFAULT_TTL;
@@ -1454,8 +1460,8 @@ setup_pkt_tunnel_reformat(struct mlx5_core_dev *mdev,
break;
case AF_INET6:
ipv6hdr = (struct ipv6hdr *)hdr;
- memcpy(&ipv6hdr->saddr, &attrs->saddr.a6, 16);
- memcpy(&ipv6hdr->daddr, &attrs->daddr.a6, 16);
+ memcpy(&ipv6hdr->saddr, &attrs->addrs.saddr.a6, 16);
+ memcpy(&ipv6hdr->daddr, &attrs->addrs.daddr.a6, 16);
ipv6hdr->nexthdr = IPPROTO_ESP;
ipv6hdr->version = 6;
ipv6hdr->hop_limit = IPSEC_TUNNEL_DEFAULT_TTL;
@@ -1489,7 +1495,7 @@ static int get_reformat_type(struct mlx5_accel_esp_xfrm_attrs *attrs)
return MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT_OVER_UDP;
return MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT;
case XFRM_DEV_OFFLOAD_OUT:
- if (attrs->family == AF_INET) {
+ if (attrs->addrs.family == AF_INET) {
if (attrs->encap)
return MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_UDPV4;
return MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4;
@@ -1603,7 +1609,7 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
struct mlx5_fc *counter;
int err = 0;

- rx = rx_ft_get(mdev, ipsec, attrs->family, attrs->type);
+ rx = rx_ft_get(mdev, ipsec, attrs->addrs.family, attrs->type);
if (IS_ERR(rx))
return PTR_ERR(rx);

@@ -1613,10 +1619,10 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
goto err_alloc;
}

- if (attrs->family == AF_INET)
- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+ if (attrs->addrs.family == AF_INET)
+ setup_fte_addr4(spec, &attrs->addrs);
else
- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_addr6(spec, &attrs->addrs);

setup_fte_spi(spec, attrs->spi, attrs->encap);
if (!attrs->encap)
@@ -1705,7 +1711,7 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
err_mod_header:
kvfree(spec);
err_alloc:
- rx_ft_put(ipsec, attrs->family, attrs->type);
+ rx_ft_put(ipsec, attrs->addrs.family, attrs->type);
return err;
}

@@ -1737,10 +1743,10 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)

switch (attrs->type) {
case XFRM_DEV_OFFLOAD_CRYPTO:
- if (attrs->family == AF_INET)
- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+ if (attrs->addrs.family == AF_INET)
+ setup_fte_addr4(spec, &attrs->addrs);
else
- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_addr6(spec, &attrs->addrs);
setup_fte_spi(spec, attrs->spi, false);
setup_fte_esp(spec);
setup_fte_reg_a(spec);
@@ -1824,10 +1830,10 @@ static int tx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry)
}

tx = ipsec_tx(ipsec, attrs->type);
- if (attrs->family == AF_INET)
- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+ if (attrs->addrs.family == AF_INET)
+ setup_fte_addr4(spec, &attrs->addrs);
else
- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_addr6(spec, &attrs->addrs);

setup_fte_no_frags(spec);
setup_fte_upper_proto_match(spec, &attrs->upspec);
@@ -1897,12 +1903,12 @@ static int rx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry)
struct mlx5e_ipsec_rx *rx;
int err, dstn = 0;

- ft = rx_ft_get_policy(mdev, pol_entry->ipsec, attrs->family, attrs->prio,
- attrs->type);
+ ft = rx_ft_get_policy(mdev, pol_entry->ipsec, attrs->addrs.family,
+ attrs->prio, attrs->type);
if (IS_ERR(ft))
return PTR_ERR(ft);

- rx = ipsec_rx(pol_entry->ipsec, attrs->family, attrs->type);
+ rx = ipsec_rx(pol_entry->ipsec, attrs->addrs.family, attrs->type);

spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
if (!spec) {
@@ -1910,10 +1916,10 @@ static int rx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry)
goto err_alloc;
}

- if (attrs->family == AF_INET)
- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+ if (attrs->addrs.family == AF_INET)
+ setup_fte_addr4(spec, &attrs->addrs);
else
- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_addr6(spec, &attrs->addrs);

setup_fte_no_frags(spec);
setup_fte_upper_proto_match(spec, &attrs->upspec);
@@ -1954,7 +1960,8 @@ static int rx_add_policy(struct mlx5e_ipsec_pol_entry *pol_entry)
err_action:
kvfree(spec);
err_alloc:
- rx_ft_put_policy(pol_entry->ipsec, attrs->family, attrs->prio, attrs->type);
+ rx_ft_put_policy(pol_entry->ipsec, attrs->addrs.family, attrs->prio,
+ attrs->type);
return err;
}

@@ -2214,7 +2221,8 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
mlx5_fc_destroy(mdev, ipsec_rule->replay.fc);
}
mlx5_esw_ipsec_rx_id_mapping_remove(sa_entry);
- rx_ft_put(sa_entry->ipsec, sa_entry->attrs.family, sa_entry->attrs.type);
+ rx_ft_put(sa_entry->ipsec, sa_entry->attrs.addrs.family,
+ sa_entry->attrs.type);
}

int mlx5e_accel_ipsec_fs_add_pol(struct mlx5e_ipsec_pol_entry *pol_entry)
@@ -2250,7 +2258,8 @@ void mlx5e_accel_ipsec_fs_del_pol(struct mlx5e_ipsec_pol_entry *pol_entry)
mlx5e_ipsec_unblock_tc_offload(pol_entry->ipsec->mdev);

if (pol_entry->attrs.dir == XFRM_DEV_OFFLOAD_IN) {
- rx_ft_put_policy(pol_entry->ipsec, pol_entry->attrs.family,
+ rx_ft_put_policy(pol_entry->ipsec,
+ pol_entry->attrs.addrs.family,
pol_entry->attrs.prio, pol_entry->attrs.type);
return;
}
@@ -2390,7 +2399,7 @@ bool mlx5e_ipsec_fs_tunnel_enabled(struct mlx5e_ipsec_sa_entry *sa_entry)
struct mlx5e_ipsec_rx *rx;
struct mlx5e_ipsec_tx *tx;

- rx = ipsec_rx(sa_entry->ipsec, attrs->family, attrs->type);
+ rx = ipsec_rx(sa_entry->ipsec, attrs->addrs.family, attrs->type);
tx = ipsec_tx(sa_entry->ipsec, attrs->type);
if (sa_entry->attrs.dir == XFRM_DEV_OFFLOAD_OUT)
return tx->allow_tunnel_mode;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
index f4cb3e78d0651..7cead1ba0bfa1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
@@ -3770,6 +3770,8 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,

if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
+ if (mlx5_mode == MLX5_ESWITCH_OFFLOADS)
+ esw->dev->priv.flags &= ~MLX5_PRIV_FLAGS_SWITCH_LEGACY;
mlx5_eswitch_disable_locked(esw);
if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
if (mlx5_devlink_trap_get_num_active(esw->dev)) {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index a2fc937d54617..172862a70c70d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -193,7 +193,9 @@ static int mlx5_sriov_enable(struct pci_dev *pdev, int num_vfs)
err = pci_enable_sriov(pdev, num_vfs);
if (err) {
mlx5_core_warn(dev, "pci_enable_sriov failed : %d\n", err);
+ devl_lock(devlink);
mlx5_device_disable_sriov(dev, num_vfs, true, true);
+ devl_unlock(devlink);
}
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
index 030a5776c9374..a4c19af1775f1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_dbg.c
@@ -1050,8 +1050,8 @@ static int dr_dump_domain_all(struct seq_file *file, struct mlx5dr_domain *dmn)
struct mlx5dr_table *tbl;
int ret;

- mutex_lock(&dmn->dump_info.dbg_mutex);
mlx5dr_domain_lock(dmn);
+ mutex_lock(&dmn->dump_info.dbg_mutex);

ret = dr_dump_domain(file, dmn);
if (ret < 0)
@@ -1064,8 +1064,8 @@ static int dr_dump_domain_all(struct seq_file *file, struct mlx5dr_domain *dmn)
}

unlock_mutex:
- mlx5dr_domain_unlock(dmn);
mutex_unlock(&dmn->dump_info.dbg_mutex);
+ mlx5dr_domain_unlock(dmn);
return ret;
}

diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
index 5a932460db581..6b2dbfbeef377 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_ptp.c
@@ -562,7 +562,7 @@ static int sparx5_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
static struct ptp_clock_info sparx5_ptp_clock_info = {
.owner = THIS_MODULE,
.name = "sparx5 ptp",
- .max_adj = 200000,
+ .max_adj = 10000000,
.gettime64 = sparx5_ptp_gettime64,
.settime64 = sparx5_ptp_settime64,
.adjtime = sparx5_ptp_adjtime,
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.h b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.h
index ced35033a6c5d..b1c6c5c6f16ca 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_qos.h
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_qos.h
@@ -35,7 +35,7 @@
#define SPX5_SE_BURST_UNIT 4096

/* Dwrr */
-#define SPX5_DWRR_COST_MAX 63
+#define SPX5_DWRR_COST_MAX 31

struct sparx5_shaper {
u32 mode;
diff --git a/drivers/net/ethernet/mscc/ocelot_net.c b/drivers/net/ethernet/mscc/ocelot_net.c
index 7c9540a717251..7df78004dba91 100644
--- a/drivers/net/ethernet/mscc/ocelot_net.c
+++ b/drivers/net/ethernet/mscc/ocelot_net.c
@@ -551,44 +551,81 @@ static int ocelot_port_stop(struct net_device *dev)
return 0;
}

-static netdev_tx_t ocelot_port_xmit(struct sk_buff *skb, struct net_device *dev)
+static bool ocelot_xmit_timestamp(struct ocelot *ocelot, int port,
+ struct sk_buff *skb, u32 *rew_op)
{
- struct ocelot_port_private *priv = netdev_priv(dev);
- struct ocelot_port *ocelot_port = &priv->port;
- struct ocelot *ocelot = ocelot_port->ocelot;
- int port = priv->port.index;
- u32 rew_op = 0;
-
- if (!static_branch_unlikely(&ocelot_fdma_enabled) &&
- !ocelot_can_inject(ocelot, 0))
- return NETDEV_TX_BUSY;
-
- /* Check if timestamping is needed */
if (ocelot->ptp && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
struct sk_buff *clone = NULL;

if (ocelot_port_txtstamp_request(ocelot, port, skb, &clone)) {
kfree_skb(skb);
- return NETDEV_TX_OK;
+ return false;
}

if (clone)
OCELOT_SKB_CB(skb)->clone = clone;

- rew_op = ocelot_ptp_rew_op(skb);
+ *rew_op = ocelot_ptp_rew_op(skb);
}

- if (static_branch_unlikely(&ocelot_fdma_enabled)) {
- ocelot_fdma_inject_frame(ocelot, port, rew_op, skb, dev);
- } else {
- ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
+ return true;
+}
+
+static netdev_tx_t ocelot_port_xmit_fdma(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct ocelot_port_private *priv = netdev_priv(dev);
+ struct ocelot_port *ocelot_port = &priv->port;
+ struct ocelot *ocelot = ocelot_port->ocelot;
+ int port = priv->port.index;
+ u32 rew_op = 0;
+
+ if (!ocelot_xmit_timestamp(ocelot, port, skb, &rew_op))
+ return NETDEV_TX_OK;
+
+ ocelot_fdma_inject_frame(ocelot, port, rew_op, skb, dev);
+
+ return NETDEV_TX_OK;
+}

- consume_skb(skb);
+static netdev_tx_t ocelot_port_xmit_inj(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct ocelot_port_private *priv = netdev_priv(dev);
+ struct ocelot_port *ocelot_port = &priv->port;
+ struct ocelot *ocelot = ocelot_port->ocelot;
+ int port = priv->port.index;
+ u32 rew_op = 0;
+
+ ocelot_lock_inj_grp(ocelot, 0);
+
+ if (!ocelot_can_inject(ocelot, 0)) {
+ ocelot_unlock_inj_grp(ocelot, 0);
+ return NETDEV_TX_BUSY;
}

+ if (!ocelot_xmit_timestamp(ocelot, port, skb, &rew_op)) {
+ ocelot_unlock_inj_grp(ocelot, 0);
+ return NETDEV_TX_OK;
+ }
+
+ ocelot_port_inject_frame(ocelot, port, 0, rew_op, skb);
+
+ ocelot_unlock_inj_grp(ocelot, 0);
+
+ consume_skb(skb);
+
return NETDEV_TX_OK;
}

+static netdev_tx_t ocelot_port_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ if (static_branch_unlikely(&ocelot_fdma_enabled))
+ return ocelot_port_xmit_fdma(skb, dev);
+
+ return ocelot_port_xmit_inj(skb, dev);
+}
+
enum ocelot_action_type {
OCELOT_MACT_LEARN,
OCELOT_MACT_FORGET,
diff --git a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
index b7d9657a7af3d..3f2086873a483 100644
--- a/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
+++ b/drivers/net/ethernet/myricom/myri10ge/myri10ge.c
@@ -688,6 +688,9 @@ static int myri10ge_get_firmware_capabilities(struct myri10ge_priv *mgp)

/* probe for IPv6 TSO support */
mgp->features = NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_TSO;
+ cmd.data0 = 0;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, MXGEFW_CMD_GET_MAX_TSO6_HDR_SIZE,
&cmd, 0);
if (status == 0) {
@@ -806,6 +809,7 @@ static int myri10ge_update_mac_address(struct myri10ge_priv *mgp,
| (addr[2] << 8) | addr[3]);

cmd.data1 = ((addr[4] << 8) | (addr[5]));
+ cmd.data2 = 0;

status = myri10ge_send_cmd(mgp, MXGEFW_SET_MAC_ADDRESS, &cmd, 0);
return status;
@@ -817,6 +821,9 @@ static int myri10ge_change_pause(struct myri10ge_priv *mgp, int pause)
int status, ctl;

ctl = pause ? MXGEFW_ENABLE_FLOW_CONTROL : MXGEFW_DISABLE_FLOW_CONTROL;
+ cmd.data0 = 0;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, ctl, &cmd, 0);

if (status) {
@@ -834,6 +841,9 @@ myri10ge_change_promisc(struct myri10ge_priv *mgp, int promisc, int atomic)
int status, ctl;

ctl = promisc ? MXGEFW_ENABLE_PROMISC : MXGEFW_DISABLE_PROMISC;
+ cmd.data0 = 0;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, ctl, &cmd, atomic);
if (status)
netdev_err(mgp->dev, "Failed to set promisc mode\n");
@@ -1946,6 +1956,8 @@ static int myri10ge_allocate_rings(struct myri10ge_slice_state *ss)
/* get ring sizes */
slice = ss - mgp->ss;
cmd.data0 = slice;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, MXGEFW_CMD_GET_SEND_RING_SIZE, &cmd, 0);
tx_ring_size = cmd.data0;
cmd.data0 = slice;
@@ -2238,12 +2250,16 @@ static int myri10ge_get_txrx(struct myri10ge_priv *mgp, int slice)
status = 0;
if (slice == 0 || (mgp->dev->real_num_tx_queues > 1)) {
cmd.data0 = slice;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, MXGEFW_CMD_GET_SEND_OFFSET,
&cmd, 0);
ss->tx.lanai = (struct mcp_kreq_ether_send __iomem *)
(mgp->sram + cmd.data0);
}
cmd.data0 = slice;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status |= myri10ge_send_cmd(mgp, MXGEFW_CMD_GET_SMALL_RX_OFFSET,
&cmd, 0);
ss->rx_small.lanai = (struct mcp_kreq_ether_recv __iomem *)
@@ -2312,6 +2328,7 @@ static int myri10ge_open(struct net_device *dev)
if (mgp->num_slices > 1) {
cmd.data0 = mgp->num_slices;
cmd.data1 = MXGEFW_SLICE_INTR_MODE_ONE_PER_SLICE;
+ cmd.data2 = 0;
if (mgp->dev->real_num_tx_queues > 1)
cmd.data1 |= MXGEFW_SLICE_ENABLE_MULTIPLE_TX_QUEUES;
status = myri10ge_send_cmd(mgp, MXGEFW_CMD_ENABLE_RSS_QUEUES,
@@ -2414,6 +2431,8 @@ static int myri10ge_open(struct net_device *dev)

/* now give firmware buffers sizes, and MTU */
cmd.data0 = dev->mtu + ETH_HLEN + VLAN_HLEN;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status = myri10ge_send_cmd(mgp, MXGEFW_CMD_SET_MTU, &cmd, 0);
cmd.data0 = mgp->small_bytes;
status |=
@@ -2472,7 +2491,6 @@ static int myri10ge_open(struct net_device *dev)
static int myri10ge_close(struct net_device *dev)
{
struct myri10ge_priv *mgp = netdev_priv(dev);
- struct myri10ge_cmd cmd;
int status, old_down_cnt;
int i;

@@ -2491,8 +2509,13 @@ static int myri10ge_close(struct net_device *dev)

netif_tx_stop_all_queues(dev);
if (mgp->rebooted == 0) {
+ struct myri10ge_cmd cmd;
+
old_down_cnt = mgp->down_cnt;
mb();
+ cmd.data0 = 0;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
status =
myri10ge_send_cmd(mgp, MXGEFW_CMD_ETHERNET_DOWN, &cmd, 0);
if (status)
@@ -2956,6 +2979,9 @@ static void myri10ge_set_multicast_list(struct net_device *dev)

/* Disable multicast filtering */

+ cmd.data0 = 0;
+ cmd.data1 = 0;
+ cmd.data2 = 0;
err = myri10ge_send_cmd(mgp, MXGEFW_ENABLE_ALLMULTI, &cmd, 1);
if (err != 0) {
netdev_err(dev, "Failed MXGEFW_ENABLE_ALLMULTI, error status: %d\n",
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
index 9b7f78b6cdb1e..a632536bd7f2f 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
@@ -224,9 +224,10 @@ static int ionic_get_link_ksettings(struct net_device *netdev,
/* This means there's no module plugged in */
break;
default:
- dev_info(lif->ionic->dev, "unknown xcvr type pid=%d / 0x%x\n",
- idev->port_info->status.xcvr.pid,
- idev->port_info->status.xcvr.pid);
+ dev_dbg_ratelimited(lif->ionic->dev,
+ "unknown xcvr type pid=%d / 0x%x\n",
+ idev->port_info->status.xcvr.pid,
+ idev->port_info->status.xcvr.pid);
break;
}

diff --git a/drivers/net/ethernet/sun/sunhme.c b/drivers/net/ethernet/sun/sunhme.c
index 50ace461a1af4..89ac15190770d 100644
--- a/drivers/net/ethernet/sun/sunhme.c
+++ b/drivers/net/ethernet/sun/sunhme.c
@@ -2551,6 +2551,9 @@ static int happy_meal_sbus_probe_one(struct platform_device *op, int is_qfe)
goto err_out_clear_quattro;
}

+ /* BIGMAC may have bogus sizes */
+ if ((op->resource[3].end - op->resource[3].start) >= BMAC_REG_SIZE)
+ op->resource[3].end = op->resource[3].start + BMAC_REG_SIZE - 1;
hp->bigmacregs = devm_platform_ioremap_resource(op, 3);
if (IS_ERR(hp->bigmacregs)) {
dev_err(&op->dev, "Cannot map BIGMAC registers.\n");
diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index 3a13d60a947a8..cd0a196e6ee3d 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -192,6 +192,7 @@ config TI_ICSSG_PRUETH
depends on NET_SWITCHDEV
depends on ARCH_K3 && OF && TI_K3_UDMA_GLUE_LAYER
depends on PTP_1588_CLOCK_OPTIONAL
+ depends on HSR || !HSR
help
Support dual Gigabit Ethernet ports over the ICSSG PRU Subsystem.
This subsystem is available starting with the AM65 platform.
diff --git a/drivers/net/ethernet/ti/cpsw_new.c b/drivers/net/ethernet/ti/cpsw_new.c
index 0eee1a0527b5c..a74caaca94d11 100644
--- a/drivers/net/ethernet/ti/cpsw_new.c
+++ b/drivers/net/ethernet/ti/cpsw_new.c
@@ -1970,7 +1970,7 @@ static int cpsw_probe(struct platform_device *pdev)
/* setup netdevs */
ret = cpsw_create_ports(cpsw);
if (ret)
- goto clean_unregister_netdev;
+ goto clean_cpts;

/* Grab RX and TX IRQs. Note that we also have RX_THRESHOLD and
* MISC IRQs which are always kept disabled with this driver so
@@ -1984,14 +1984,14 @@ static int cpsw_probe(struct platform_device *pdev)
0, dev_name(dev), cpsw);
if (ret < 0) {
dev_err(dev, "error attaching irq (%d)\n", ret);
- goto clean_unregister_netdev;
+ goto clean_cpts;
}

ret = devm_request_irq(dev, cpsw->irqs_table[1], cpsw_tx_interrupt,
0, dev_name(dev), cpsw);
if (ret < 0) {
dev_err(dev, "error attaching irq (%d)\n", ret);
- goto clean_unregister_netdev;
+ goto clean_cpts;
}

if (!cpsw->cpts)
@@ -2001,7 +2001,7 @@ static int cpsw_probe(struct platform_device *pdev)
0, dev_name(&pdev->dev), cpsw);
if (ret < 0) {
dev_err(dev, "error attaching misc irq (%d)\n", ret);
- goto clean_unregister_netdev;
+ goto clean_cpts;
}

/* Enable misc CPTS evnt_pend IRQ */
@@ -2010,7 +2010,7 @@ static int cpsw_probe(struct platform_device *pdev)
skip_cpts:
ret = cpsw_register_notifiers(cpsw);
if (ret)
- goto clean_unregister_netdev;
+ goto clean_cpts;

ret = cpsw_register_devlink(cpsw);
if (ret)
@@ -2032,8 +2032,6 @@ static int cpsw_probe(struct platform_device *pdev)

clean_unregister_notifiers:
cpsw_unregister_notifiers(cpsw);
-clean_unregister_netdev:
- cpsw_unregister_ports(cpsw);
clean_cpts:
cpts_release(cpsw->cpts);
cpdma_ctlr_destroy(cpsw->dma);
diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
index aef316278eb45..78e13fb394933 100644
--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
+++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
@@ -394,28 +394,29 @@ static void ixp_tx_timestamp(struct port *port, struct sk_buff *skb)
__raw_writel(TX_SNAPSHOT_LOCKED, &regs->channel[ch].ch_event);
}

-static int hwtstamp_set(struct net_device *netdev, struct ifreq *ifr)
+static int ixp4xx_hwtstamp_set(struct net_device *netdev,
+ struct kernel_hwtstamp_config *cfg,
+ struct netlink_ext_ack *extack)
{
- struct hwtstamp_config cfg;
struct ixp46x_ts_regs *regs;
struct port *port = netdev_priv(netdev);
int ret;
int ch;

- if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
- return -EFAULT;
+ if (!netif_running(netdev))
+ return -EINVAL;

ret = ixp46x_ptp_find(&port->timesync_regs, &port->phc_index);
if (ret)
- return ret;
+ return -EOPNOTSUPP;

ch = PORT2CHANNEL(port);
regs = port->timesync_regs;

- if (cfg.tx_type != HWTSTAMP_TX_OFF && cfg.tx_type != HWTSTAMP_TX_ON)
+ if (cfg->tx_type != HWTSTAMP_TX_OFF && cfg->tx_type != HWTSTAMP_TX_ON)
return -ERANGE;

- switch (cfg.rx_filter) {
+ switch (cfg->rx_filter) {
case HWTSTAMP_FILTER_NONE:
port->hwts_rx_en = 0;
break;
@@ -431,39 +432,45 @@ static int hwtstamp_set(struct net_device *netdev, struct ifreq *ifr)
return -ERANGE;
}

- port->hwts_tx_en = cfg.tx_type == HWTSTAMP_TX_ON;
+ port->hwts_tx_en = cfg->tx_type == HWTSTAMP_TX_ON;

/* Clear out any old time stamps. */
__raw_writel(TX_SNAPSHOT_LOCKED | RX_SNAPSHOT_LOCKED,
&regs->channel[ch].ch_event);

- return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
+ return 0;
}

-static int hwtstamp_get(struct net_device *netdev, struct ifreq *ifr)
+static int ixp4xx_hwtstamp_get(struct net_device *netdev,
+ struct kernel_hwtstamp_config *cfg)
{
- struct hwtstamp_config cfg;
struct port *port = netdev_priv(netdev);

- cfg.flags = 0;
- cfg.tx_type = port->hwts_tx_en ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
+ if (!cpu_is_ixp46x())
+ return -EOPNOTSUPP;
+
+ if (!netif_running(netdev))
+ return -EINVAL;
+
+ cfg->flags = 0;
+ cfg->tx_type = port->hwts_tx_en ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;

switch (port->hwts_rx_en) {
case 0:
- cfg.rx_filter = HWTSTAMP_FILTER_NONE;
+ cfg->rx_filter = HWTSTAMP_FILTER_NONE;
break;
case PTP_SLAVE_MODE:
- cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_SYNC;
+ cfg->rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_SYNC;
break;
case PTP_MASTER_MODE:
- cfg.rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ;
+ cfg->rx_filter = HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ;
break;
default:
WARN_ON_ONCE(1);
return -ERANGE;
}

- return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
+ return 0;
}

static int ixp4xx_mdio_cmd(struct mii_bus *bus, int phy_id, int location,
@@ -985,21 +992,6 @@ static void eth_set_mcast_list(struct net_device *dev)
}


-static int eth_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
-{
- if (!netif_running(dev))
- return -EINVAL;
-
- if (cpu_is_ixp46x()) {
- if (cmd == SIOCSHWTSTAMP)
- return hwtstamp_set(dev, req);
- if (cmd == SIOCGHWTSTAMP)
- return hwtstamp_get(dev, req);
- }
-
- return phy_mii_ioctl(dev->phydev, req, cmd);
-}
-
/* ethtool support */

static void ixp4xx_get_drvinfo(struct net_device *dev,
@@ -1433,9 +1425,11 @@ static const struct net_device_ops ixp4xx_netdev_ops = {
.ndo_change_mtu = ixp4xx_eth_change_mtu,
.ndo_start_xmit = eth_xmit,
.ndo_set_rx_mode = eth_set_mcast_list,
- .ndo_eth_ioctl = eth_ioctl,
+ .ndo_eth_ioctl = phy_do_ioctl_running,
.ndo_set_mac_address = eth_mac_addr,
.ndo_validate_addr = eth_validate_addr,
+ .ndo_hwtstamp_get = ixp4xx_hwtstamp_get,
+ .ndo_hwtstamp_set = ixp4xx_hwtstamp_set,
};

static struct eth_plat_info *ixp4xx_of_get_platdata(struct device *dev)
diff --git a/drivers/net/ethernet/xscale/ptp_ixp46x.c b/drivers/net/ethernet/xscale/ptp_ixp46x.c
index 94203eb46e6b0..93c64db22a696 100644
--- a/drivers/net/ethernet/xscale/ptp_ixp46x.c
+++ b/drivers/net/ethernet/xscale/ptp_ixp46x.c
@@ -232,6 +232,9 @@ static struct ixp_clock ixp_clock;

int ixp46x_ptp_find(struct ixp46x_ts_regs *__iomem *regs, int *phc_index)
{
+ if (!cpu_is_ixp46x())
+ return -ENODEV;
+
*regs = ixp_clock.regs;
*phc_index = ptp_clock_index(ixp_clock.ptp_clock);

diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index aaf7d755fc8a1..3770bc84a9445 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -1568,6 +1568,11 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
if (create)
macvlan_port_destroy(port->dev);
}
+ /* @dev might have been made visible before an error was detected.
+ * Make sure to observe an RCU grace period before our caller
+ * (rtnl_newlink()) frees it.
+ */
+ synchronize_net();
return err;
}
EXPORT_SYMBOL_GPL(macvlan_common_newlink);
diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c
index 503a9174321c6..617333343ca00 100644
--- a/drivers/net/mctp/mctp-i2c.c
+++ b/drivers/net/mctp/mctp-i2c.c
@@ -243,6 +243,12 @@ static int mctp_i2c_slave_cb(struct i2c_client *client,
return 0;

switch (event) {
+ case I2C_SLAVE_READ_REQUESTED:
+ case I2C_SLAVE_READ_PROCESSED:
+ /* MCTP I2C transport only uses writes */
+ midev->rx_pos = 0;
+ *val = 0xff;
+ break;
case I2C_SLAVE_WRITE_RECEIVED:
if (midev->rx_pos < MCTP_I2C_BUFSZ) {
midev->rx_buffer[midev->rx_pos] = *val;
@@ -280,6 +286,9 @@ static int mctp_i2c_recv(struct mctp_i2c_dev *midev)
size_t recvlen;
int status;

+ if (midev->rx_pos == 0)
+ return 0;
+
/* + 1 for the PEC */
if (midev->rx_pos < MCTP_I2C_MINLEN + 1) {
ndev->stats.rx_length_errors++;
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 6153a35af1070..cae748b762236 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -525,9 +525,13 @@ static const struct sfp_quirk sfp_quirks[] = {
SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex,
sfp_fixup_ignore_tx_fault),

- // Lantech 8330-262D-E can operate at 2500base-X, but incorrectly report
- // 2500MBd NRZ in their EEPROM
+ // Lantech 8330-262D-E and 8330-265D can operate at 2500base-X, but
+ // incorrectly report 2500MBd NRZ in their EEPROM.
+ // Some 8330-265D modules have inverted LOS, while all of them report
+ // normal LOS in EEPROM. Therefore we need to ignore LOS entirely.
SFP_QUIRK_S("Lantech", "8330-262D-E", sfp_quirk_2500basex),
+ SFP_QUIRK("Lantech", "8330-265D", sfp_quirk_2500basex,
+ sfp_fixup_ignore_los),

SFP_QUIRK_S("UBNT", "UF-INSTANT", sfp_quirk_ubnt_uf_instant),

diff --git a/drivers/net/usb/Kconfig b/drivers/net/usb/Kconfig
index 3c360d4f06352..73498a3ea44ce 100644
--- a/drivers/net/usb/Kconfig
+++ b/drivers/net/usb/Kconfig
@@ -321,7 +321,6 @@ config USB_NET_DM9601
config USB_NET_SR9700
tristate "CoreChip-sz SR9700 based USB 1.1 10/100 ethernet devices"
depends on USB_USBNET
- select CRC32
help
This option adds support for CoreChip-sz SR9700 based USB 1.1
10/100 Ethernet adapters.
diff --git a/drivers/net/usb/catc.c b/drivers/net/usb/catc.c
index ff439ef535ac9..98346cb4ece01 100644
--- a/drivers/net/usb/catc.c
+++ b/drivers/net/usb/catc.c
@@ -64,6 +64,16 @@ static const char driver_name[] = "catc";
#define CTRL_QUEUE 16 /* Max control requests in flight (power of two) */
#define RX_PKT_SZ 1600 /* Max size of receive packet for F5U011 */

+/*
+ * USB endpoints.
+ */
+
+enum catc_usb_ep {
+ CATC_USB_EP_CONTROL = 0,
+ CATC_USB_EP_BULK = 1,
+ CATC_USB_EP_INT_IN = 2,
+};
+
/*
* Control requests.
*/
@@ -772,6 +782,13 @@ static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id
u8 broadcast[ETH_ALEN];
u8 *macbuf;
int pktsz, ret = -ENOMEM;
+ static const u8 bulk_ep_addr[] = {
+ CATC_USB_EP_BULK | USB_DIR_OUT,
+ CATC_USB_EP_BULK | USB_DIR_IN,
+ 0};
+ static const u8 int_ep_addr[] = {
+ CATC_USB_EP_INT_IN | USB_DIR_IN,
+ 0};

macbuf = kmalloc(ETH_ALEN, GFP_KERNEL);
if (!macbuf)
@@ -784,6 +801,14 @@ static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id
goto fail_mem;
}

+ /* Verify that all required endpoints are present */
+ if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(intf, int_ep_addr)) {
+ dev_err(dev, "Missing or invalid endpoints\n");
+ ret = -ENODEV;
+ goto fail_mem;
+ }
+
netdev = alloc_etherdev(sizeof(struct catc));
if (!netdev)
goto fail_mem;
@@ -828,14 +853,14 @@ static int catc_probe(struct usb_interface *intf, const struct usb_device_id *id
usb_fill_control_urb(catc->ctrl_urb, usbdev, usb_sndctrlpipe(usbdev, 0),
NULL, NULL, 0, catc_ctrl_done, catc);

- usb_fill_bulk_urb(catc->tx_urb, usbdev, usb_sndbulkpipe(usbdev, 1),
- NULL, 0, catc_tx_done, catc);
+ usb_fill_bulk_urb(catc->tx_urb, usbdev, usb_sndbulkpipe(usbdev, CATC_USB_EP_BULK),
+ NULL, 0, catc_tx_done, catc);

- usb_fill_bulk_urb(catc->rx_urb, usbdev, usb_rcvbulkpipe(usbdev, 1),
- catc->rx_buf, pktsz, catc_rx_done, catc);
+ usb_fill_bulk_urb(catc->rx_urb, usbdev, usb_rcvbulkpipe(usbdev, CATC_USB_EP_BULK),
+ catc->rx_buf, pktsz, catc_rx_done, catc);

- usb_fill_int_urb(catc->irq_urb, usbdev, usb_rcvintpipe(usbdev, 2),
- catc->irq_buf, 2, catc_irq_done, catc, 1);
+ usb_fill_int_urb(catc->irq_urb, usbdev, usb_rcvintpipe(usbdev, CATC_USB_EP_INT_IN),
+ catc->irq_buf, 2, catc_irq_done, catc, 1);

if (!catc->is_f5u011) {
u32 *buf;
diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c
index c9efb7df892ec..e01d14f6c3667 100644
--- a/drivers/net/usb/kaweth.c
+++ b/drivers/net/usb/kaweth.c
@@ -765,7 +765,6 @@ static void kaweth_set_rx_mode(struct net_device *net)

netdev_dbg(net, "Setting Rx mode to %d\n", packet_filter_bitmap);

- netif_stop_queue(net);

if (net->flags & IFF_PROMISC) {
packet_filter_bitmap |= KAWETH_PACKET_FILTER_PROMISCUOUS;
@@ -775,7 +774,6 @@ static void kaweth_set_rx_mode(struct net_device *net)
}

kaweth->packet_filter_bitmap = packet_filter_bitmap;
- netif_wake_queue(net);
}

/****************************************************************
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
index 2da67814556f9..4bb53919a1d7e 100644
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -2086,8 +2086,6 @@ static int lan78xx_mdio_init(struct lan78xx_net *dev)
dev->mdiobus->phy_mask = ~(1 << 1);
break;
case ID_REV_CHIP_ID_7801_:
- /* scan thru PHYAD[2..0] */
- dev->mdiobus->phy_mask = ~(0xFF);
break;
}

diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
index c514483134f05..0f16a133c75d1 100644
--- a/drivers/net/usb/pegasus.c
+++ b/drivers/net/usb/pegasus.c
@@ -31,6 +31,17 @@ static const char driver_name[] = "pegasus";
BMSR_100FULL | BMSR_ANEGCAPABLE)
#define CARRIER_CHECK_DELAY (2 * HZ)

+/*
+ * USB endpoints.
+ */
+
+enum pegasus_usb_ep {
+ PEGASUS_USB_EP_CONTROL = 0,
+ PEGASUS_USB_EP_BULK_IN = 1,
+ PEGASUS_USB_EP_BULK_OUT = 2,
+ PEGASUS_USB_EP_INT_IN = 3,
+};
+
static bool loopback;
static bool mii_mode;
static char *devid;
@@ -545,7 +556,7 @@ static void read_bulk_callback(struct urb *urb)
goto tl_sched;
goon:
usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
- usb_rcvbulkpipe(pegasus->usb, 1),
+ usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
pegasus->rx_skb->data, PEGASUS_MTU,
read_bulk_callback, pegasus);
rx_status = usb_submit_urb(pegasus->rx_urb, GFP_ATOMIC);
@@ -585,7 +596,7 @@ static void rx_fixup(struct tasklet_struct *t)
return;
}
usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
- usb_rcvbulkpipe(pegasus->usb, 1),
+ usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
pegasus->rx_skb->data, PEGASUS_MTU,
read_bulk_callback, pegasus);
try_again:
@@ -713,7 +724,7 @@ static netdev_tx_t pegasus_start_xmit(struct sk_buff *skb,
((__le16 *) pegasus->tx_buff)[0] = cpu_to_le16(l16);
skb_copy_from_linear_data(skb, pegasus->tx_buff + 2, skb->len);
usb_fill_bulk_urb(pegasus->tx_urb, pegasus->usb,
- usb_sndbulkpipe(pegasus->usb, 2),
+ usb_sndbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_OUT),
pegasus->tx_buff, count,
write_bulk_callback, pegasus);
if ((res = usb_submit_urb(pegasus->tx_urb, GFP_ATOMIC))) {
@@ -840,7 +851,7 @@ static int pegasus_open(struct net_device *net)
set_registers(pegasus, EthID, 6, net->dev_addr);

usb_fill_bulk_urb(pegasus->rx_urb, pegasus->usb,
- usb_rcvbulkpipe(pegasus->usb, 1),
+ usb_rcvbulkpipe(pegasus->usb, PEGASUS_USB_EP_BULK_IN),
pegasus->rx_skb->data, PEGASUS_MTU,
read_bulk_callback, pegasus);
if ((res = usb_submit_urb(pegasus->rx_urb, GFP_KERNEL))) {
@@ -851,7 +862,7 @@ static int pegasus_open(struct net_device *net)
}

usb_fill_int_urb(pegasus->intr_urb, pegasus->usb,
- usb_rcvintpipe(pegasus->usb, 3),
+ usb_rcvintpipe(pegasus->usb, PEGASUS_USB_EP_INT_IN),
pegasus->intr_buff, sizeof(pegasus->intr_buff),
intr_callback, pegasus, pegasus->intr_interval);
if ((res = usb_submit_urb(pegasus->intr_urb, GFP_KERNEL))) {
@@ -1136,10 +1147,24 @@ static int pegasus_probe(struct usb_interface *intf,
pegasus_t *pegasus;
int dev_index = id - pegasus_ids;
int res = -ENOMEM;
+ static const u8 bulk_ep_addr[] = {
+ PEGASUS_USB_EP_BULK_IN | USB_DIR_IN,
+ PEGASUS_USB_EP_BULK_OUT | USB_DIR_OUT,
+ 0};
+ static const u8 int_ep_addr[] = {
+ PEGASUS_USB_EP_INT_IN | USB_DIR_IN,
+ 0};

if (pegasus_blacklisted(dev))
return -ENODEV;

+ /* Verify that all required endpoints are present */
+ if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
+ !usb_check_int_endpoints(intf, int_ep_addr)) {
+ dev_err(&intf->dev, "Missing or invalid endpoints\n");
+ return -ENODEV;
+ }
+
net = alloc_etherdev(sizeof(struct pegasus));
if (!net)
goto out;
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index d27e62939bf13..e4eb4e5425364 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -2447,6 +2447,8 @@ static int r8152_tx_agg_fill(struct r8152 *tp, struct tx_agg *agg)
ret = usb_submit_urb(agg->urb, GFP_ATOMIC);
if (ret < 0)
usb_autopm_put_interface_async(tp->intf);
+ else
+ netif_trans_update(tp->netdev);

out_tx_fill:
return ret;
diff --git a/drivers/net/usb/sr9700.c b/drivers/net/usb/sr9700.c
index 213b4817cfdf6..e4d7bcd0d99c2 100644
--- a/drivers/net/usb/sr9700.c
+++ b/drivers/net/usb/sr9700.c
@@ -18,7 +18,6 @@
#include <linux/ethtool.h>
#include <linux/mii.h>
#include <linux/usb.h>
-#include <linux/crc32.h>
#include <linux/usb/usbnet.h>

#include "sr9700.h"
@@ -265,31 +264,15 @@ static const struct ethtool_ops sr9700_ethtool_ops = {
static void sr9700_set_multicast(struct net_device *netdev)
{
struct usbnet *dev = netdev_priv(netdev);
- /* We use the 20 byte dev->data for our 8 byte filter buffer
- * to avoid allocating memory that is tricky to free later
- */
- u8 *hashes = (u8 *)&dev->data;
/* rx_ctl setting : enable, disable_long, disable_crc */
u8 rx_ctl = RCR_RXEN | RCR_DIS_CRC | RCR_DIS_LONG;

- memset(hashes, 0x00, SR_MCAST_SIZE);
- /* broadcast address */
- hashes[SR_MCAST_SIZE - 1] |= SR_MCAST_ADDR_FLAG;
- if (netdev->flags & IFF_PROMISC) {
+ if (netdev->flags & IFF_PROMISC)
rx_ctl |= RCR_PRMSC;
- } else if (netdev->flags & IFF_ALLMULTI ||
- netdev_mc_count(netdev) > SR_MCAST_MAX) {
- rx_ctl |= RCR_RUNT;
- } else if (!netdev_mc_empty(netdev)) {
- struct netdev_hw_addr *ha;
-
- netdev_for_each_mc_addr(ha, netdev) {
- u32 crc = ether_crc(ETH_ALEN, ha->addr) >> 26;
- hashes[crc >> 3] |= 1 << (crc & 0x7);
- }
- }
+ else if (netdev->flags & IFF_ALLMULTI || !netdev_mc_empty(netdev))
+ /* The chip has no multicast filter */
+ rx_ctl |= RCR_ALL;

- sr_write_async(dev, SR_MAR, SR_MCAST_SIZE, hashes);
sr_write_reg_async(dev, SR_RCR, rx_ctl);
}

diff --git a/drivers/net/usb/sr9700.h b/drivers/net/usb/sr9700.h
index ea2b4de621c86..c479908f7d823 100644
--- a/drivers/net/usb/sr9700.h
+++ b/drivers/net/usb/sr9700.h
@@ -104,9 +104,7 @@
#define WCR_LINKEN (1 << 5)
/* Physical Address Reg */
#define SR_PAR 0x10 /* 0x10 ~ 0x15 6 bytes for PAR */
-/* Multicast Address Reg */
-#define SR_MAR 0x16 /* 0x16 ~ 0x1D 8 bytes for MAR */
-/* 0x1e unused */
+/* 0x16 --> 0x1E unused */
/* Phy Reset Reg */
#define SR_PRR 0x1F
#define PRR_PHY_RST (1 << 0)
@@ -161,9 +159,6 @@
/* parameters */
#define SR_SHARE_TIMEOUT 1000
#define SR_EEPROM_LEN 256
-#define SR_MCAST_SIZE 8
-#define SR_MCAST_ADDR_FLAG 0x80
-#define SR_MCAST_MAX 64
#define SR_TX_OVERHEAD 2 /* 2bytes header */
#define SR_RX_OVERHEAD 7 /* 3bytes header + 4crc tail */

diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
index 5b01642ca44e0..6b2d1e63855e8 100644
--- a/drivers/net/wan/farsync.c
+++ b/drivers/net/wan/farsync.c
@@ -2550,6 +2550,8 @@ fst_remove_one(struct pci_dev *pdev)

fst_disable_intr(card);
free_irq(card->irq, card);
+ tasklet_kill(&fst_tx_task);
+ tasklet_kill(&fst_int_task);

iounmap(card->ctlmem);
iounmap(card->mem);
diff --git a/drivers/net/wan/fsl_ucc_hdlc.c b/drivers/net/wan/fsl_ucc_hdlc.c
index 605e70f7baace..675f4b0eb502f 100644
--- a/drivers/net/wan/fsl_ucc_hdlc.c
+++ b/drivers/net/wan/fsl_ucc_hdlc.c
@@ -790,18 +790,14 @@ static void uhdlc_memclean(struct ucc_hdlc_private *priv)

if (priv->rx_buffer) {
dma_free_coherent(priv->dev,
- RX_BD_RING_LEN * MAX_RX_BUF_LENGTH,
+ (RX_BD_RING_LEN + TX_BD_RING_LEN) * MAX_RX_BUF_LENGTH,
priv->rx_buffer, priv->dma_rx_addr);
priv->rx_buffer = NULL;
priv->dma_rx_addr = 0;
- }

- if (priv->tx_buffer) {
- dma_free_coherent(priv->dev,
- TX_BD_RING_LEN * MAX_RX_BUF_LENGTH,
- priv->tx_buffer, priv->dma_tx_addr);
priv->tx_buffer = NULL;
priv->dma_tx_addr = 0;
+
}
}

diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
index 6805357ee29e6..2ff2dc4a3f58b 100644
--- a/drivers/net/wireless/ath/ath10k/sdio.c
+++ b/drivers/net/wireless/ath/ath10k/sdio.c
@@ -2486,7 +2486,11 @@ void ath10k_sdio_fw_crashed_dump(struct ath10k *ar)
if (fast_dump)
ath10k_bmi_start(ar);

+ mutex_lock(&ar->dump_mutex);
+
+ spin_lock_bh(&ar->data_lock);
ar->stats.fw_crash_counter++;
+ spin_unlock_bh(&ar->data_lock);

ath10k_sdio_disable_intrs(ar);

@@ -2504,6 +2508,8 @@ void ath10k_sdio_fw_crashed_dump(struct ath10k *ar)

ath10k_sdio_enable_intrs(ar);

+ mutex_unlock(&ar->dump_mutex);
+
ath10k_core_start_recovery(ar);
}

diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
index fef7d9b081963..408f062a4306f 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.c
+++ b/drivers/net/wireless/ath/ath10k/wmi.c
@@ -5289,8 +5289,6 @@ ath10k_wmi_event_peer_sta_ps_state_chg(struct ath10k *ar, struct sk_buff *skb)
struct ath10k_sta *arsta;
u8 peer_addr[ETH_ALEN];

- lockdep_assert_held(&ar->data_lock);
-
ev = (struct wmi_peer_sta_ps_state_chg_event *)skb->data;
ether_addr_copy(peer_addr, ev->peer_macaddr.addr);

@@ -5305,7 +5303,9 @@ ath10k_wmi_event_peer_sta_ps_state_chg(struct ath10k *ar, struct sk_buff *skb)
}

arsta = (struct ath10k_sta *)sta->drv_priv;
+ spin_lock_bh(&ar->data_lock);
arsta->peer_ps_state = __le32_to_cpu(ev->peer_ps_state);
+ spin_unlock_bh(&ar->data_lock);

exit:
rcu_read_unlock();
diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
index 735032c353b2d..c6d17dbbdb376 100644
--- a/drivers/net/wireless/ath/ath11k/core.c
+++ b/drivers/net/wireless/ath/ath11k/core.c
@@ -896,6 +896,34 @@ static const struct dmi_system_id ath11k_pm_quirk_table[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "21F9"),
},
},
+ {
+ .driver_data = (void *)ATH11K_PM_WOW,
+ .matches = { /* Z13 G1 */
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D2"),
+ },
+ },
+ {
+ .driver_data = (void *)ATH11K_PM_WOW,
+ .matches = { /* Z13 G1 */
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D3"),
+ },
+ },
+ {
+ .driver_data = (void *)ATH11K_PM_WOW,
+ .matches = { /* Z16 G1 */
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D4"),
+ },
+ },
+ {
+ .driver_data = (void *)ATH11K_PM_WOW,
+ .matches = { /* Z16 G1 */
+ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "21D5"),
+ },
+ },
{}
};

diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
index d62a2014315a0..49b79648752cf 100644
--- a/drivers/net/wireless/ath/ath11k/reg.c
+++ b/drivers/net/wireless/ath/ath11k/reg.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: BSD-3-Clause-Clear
/*
* Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
- * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ * Copyright (c) Qualcomm Technologies, Inc. and/or its subsidiaries.
*/
#include <linux/rtnetlink.h>

@@ -926,8 +926,11 @@ int ath11k_reg_handle_chan_list(struct ath11k_base *ab,
*/
if (ab->default_regd[pdev_idx] && !ab->new_regd[pdev_idx] &&
!memcmp((char *)ab->default_regd[pdev_idx]->alpha2,
- (char *)reg_info->alpha2, 2))
- goto retfail;
+ (char *)reg_info->alpha2, 2) &&
+ power_type == IEEE80211_REG_UNSET_AP) {
+ ath11k_reg_reset_info(reg_info);
+ return 0;
+ }

/* Intersect new rules with default regd if a new country setting was
* requested, i.e a default regd was already set during initialization
diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
index 5c5fc2b7642f6..761c169c992f1 100644
--- a/drivers/net/wireless/ath/ath12k/wmi.c
+++ b/drivers/net/wireless/ath/ath12k/wmi.c
@@ -4047,7 +4047,7 @@ static int ath12k_wmi_hw_mode_caps(struct ath12k_base *soc,

pref = soc->wmi_ab.preferred_hw_mode;

- if (ath12k_hw_mode_pri_map[mode] < ath12k_hw_mode_pri_map[pref]) {
+ if (ath12k_hw_mode_pri_map[mode] <= ath12k_hw_mode_pri_map[pref]) {
svc_rdy_ext->pref_hw_mode_caps = *hw_mode_caps;
soc->wmi_ab.preferred_hw_mode = mode;
}
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
index eed9ef17bc293..38d6d5617cce5 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
@@ -11379,7 +11379,13 @@ static const struct pci_device_id card_ids[] = {
{PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2754, 0, 0, 0},
{PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2761, 0, 0, 0},
{PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2762, 0, 0, 0},
- {PCI_VDEVICE(INTEL, 0x104f), 0},
+ /*
+ * This ID conflicts with i40e, but the devices can be differentiated
+ * because i40e devices use PCI_CLASS_NETWORK_ETHERNET and ipw2200
+ * devices use PCI_CLASS_NETWORK_OTHER.
+ */
+ {PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x104f),
+ PCI_CLASS_NETWORK_OTHER << 8, 0xffff00, 0},
{PCI_VDEVICE(INTEL, 0x4220), 0}, /* BG */
{PCI_VDEVICE(INTEL, 0x4221), 0}, /* BG */
{PCI_VDEVICE(INTEL, 0x4223), 0}, /* ABG */
diff --git a/drivers/net/wireless/intel/iwlegacy/3945-mac.c b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
index 74fc76c00ebcf..20d2afe1d55f9 100644
--- a/drivers/net/wireless/intel/iwlegacy/3945-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/3945-mac.c
@@ -3262,7 +3262,9 @@ il3945_store_measurement(struct device *d, struct device_attribute *attr,

D_INFO("Invoking measurement of type %d on " "channel %d (for '%s')\n",
type, params.channel, buf);
+ mutex_lock(&il->mutex);
il3945_get_measurement(il, &params, type);
+ mutex_unlock(&il->mutex);

return count;
}
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
index a94cf27ffe4b0..ac2cfef737e4f 100644
--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
@@ -4606,7 +4606,9 @@ il4965_store_tx_power(struct device *d, struct device_attribute *attr,
if (ret)
IL_INFO("%s is not in decimal form.\n", buf);
else {
+ mutex_lock(&il->mutex);
ret = il_set_tx_power(il, val, false);
+ mutex_unlock(&il->mutex);
if (ret)
IL_ERR("failed setting tx power (0x%08x).\n", ret);
else
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
index d013de30e7ed6..28d329ec31b82 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
@@ -1816,6 +1816,20 @@ void iwl_mvm_probe_resp_data_notif(struct iwl_mvm *mvm,

mvmvif = iwl_mvm_vif_from_mac80211(vif);

+ /*
+ * len_low should be 2 + n*13 (where n is the number of descriptors.
+ * 13 is the size of a NoA descriptor). We can have either one or two
+ * descriptors.
+ */
+ if (IWL_FW_CHECK(mvm, notif->noa_active &&
+ notif->noa_attr.len_low != 2 +
+ sizeof(struct ieee80211_p2p_noa_desc) &&
+ notif->noa_attr.len_low != 2 +
+ sizeof(struct ieee80211_p2p_noa_desc) * 2,
+ "Invalid noa_attr.len_low (%d)\n",
+ notif->noa_attr.len_low))
+ return;
+
new_data = kzalloc(sizeof(*new_data), GFP_KERNEL);
if (!new_data)
return;
diff --git a/drivers/net/wireless/marvell/libertas/if_usb.c b/drivers/net/wireless/marvell/libertas/if_usb.c
index 2240b4db8c036..d98c81539ba53 100644
--- a/drivers/net/wireless/marvell/libertas/if_usb.c
+++ b/drivers/net/wireless/marvell/libertas/if_usb.c
@@ -426,6 +426,8 @@ static int usb_tx_block(struct if_usb_card *cardp, uint8_t *payload, uint16_t nb
goto tx_ret;
}

+ usb_kill_urb(cardp->tx_urb);
+
usb_fill_bulk_urb(cardp->tx_urb, cardp->udev,
usb_sndbulkpipe(cardp->udev,
cardp->ep_out),
diff --git a/drivers/net/wireless/realtek/rtw88/main.c b/drivers/net/wireless/realtek/rtw88/main.c
index a4416e8986b97..57fe4ece812c8 100644
--- a/drivers/net/wireless/realtek/rtw88/main.c
+++ b/drivers/net/wireless/realtek/rtw88/main.c
@@ -709,10 +709,10 @@ void rtw_set_rx_freq_band(struct rtw_rx_pkt_stat *pkt_stat, u8 channel)
}
EXPORT_SYMBOL(rtw_set_rx_freq_band);

-void rtw_set_dtim_period(struct rtw_dev *rtwdev, int dtim_period)
+void rtw_set_dtim_period(struct rtw_dev *rtwdev, u8 dtim_period)
{
rtw_write32_set(rtwdev, REG_TCR, BIT_TCR_UPDATE_TIMIE);
- rtw_write8(rtwdev, REG_DTIM_COUNTER_ROOT, dtim_period - 1);
+ rtw_write8(rtwdev, REG_DTIM_COUNTER_ROOT, dtim_period ? dtim_period - 1 : 0);
}

void rtw_update_channel(struct rtw_dev *rtwdev, u8 center_channel,
@@ -1629,14 +1629,41 @@ static u16 rtw_get_max_scan_ie_len(struct rtw_dev *rtwdev)
return len;
}

+static struct ieee80211_supported_band *
+rtw_sband_dup(struct rtw_dev *rtwdev,
+ const struct ieee80211_supported_band *sband)
+{
+ struct ieee80211_supported_band *dup;
+
+ dup = devm_kmemdup(rtwdev->dev, sband, sizeof(*sband), GFP_KERNEL);
+ if (!dup)
+ return NULL;
+
+ dup->channels = devm_kmemdup_array(rtwdev->dev, sband->channels,
+ sband->n_channels,
+ sizeof(*sband->channels),
+ GFP_KERNEL);
+ if (!dup->channels)
+ return NULL;
+
+ dup->bitrates = devm_kmemdup_array(rtwdev->dev, sband->bitrates,
+ sband->n_bitrates,
+ sizeof(*sband->bitrates),
+ GFP_KERNEL);
+ if (!dup->bitrates)
+ return NULL;
+
+ return dup;
+}
+
static void rtw_set_supported_band(struct ieee80211_hw *hw,
const struct rtw_chip_info *chip)
{
- struct rtw_dev *rtwdev = hw->priv;
struct ieee80211_supported_band *sband;
+ struct rtw_dev *rtwdev = hw->priv;

if (chip->band & RTW_BAND_2G) {
- sband = kmemdup(&rtw_band_2ghz, sizeof(*sband), GFP_KERNEL);
+ sband = rtw_sband_dup(rtwdev, &rtw_band_2ghz);
if (!sband)
goto err_out;
if (chip->ht_supported)
@@ -1645,7 +1672,7 @@ static void rtw_set_supported_band(struct ieee80211_hw *hw,
}

if (chip->band & RTW_BAND_5G) {
- sband = kmemdup(&rtw_band_5ghz, sizeof(*sband), GFP_KERNEL);
+ sband = rtw_sband_dup(rtwdev, &rtw_band_5ghz);
if (!sband)
goto err_out;
if (chip->ht_supported)
@@ -1661,13 +1688,6 @@ static void rtw_set_supported_band(struct ieee80211_hw *hw,
rtw_err(rtwdev, "failed to set supported band\n");
}

-static void rtw_unset_supported_band(struct ieee80211_hw *hw,
- const struct rtw_chip_info *chip)
-{
- kfree(hw->wiphy->bands[NL80211_BAND_2GHZ]);
- kfree(hw->wiphy->bands[NL80211_BAND_5GHZ]);
-}
-
static void rtw_vif_smps_iter(void *data, u8 *mac,
struct ieee80211_vif *vif)
{
@@ -2285,10 +2305,7 @@ EXPORT_SYMBOL(rtw_register_hw);

void rtw_unregister_hw(struct rtw_dev *rtwdev, struct ieee80211_hw *hw)
{
- const struct rtw_chip_info *chip = rtwdev->chip;
-
ieee80211_unregister_hw(hw);
- rtw_unset_supported_band(hw, chip);
rtw_debugfs_deinit(rtwdev);
}
EXPORT_SYMBOL(rtw_unregister_hw);
diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
index c808bb271e9d0..b70b6e62449cc 100644
--- a/drivers/net/wireless/realtek/rtw88/main.h
+++ b/drivers/net/wireless/realtek/rtw88/main.h
@@ -2169,7 +2169,7 @@ enum nl80211_band rtw_hw_to_nl80211_band(enum rtw_supported_band hw_band)
}

void rtw_set_rx_freq_band(struct rtw_rx_pkt_stat *pkt_stat, u8 channel);
-void rtw_set_dtim_period(struct rtw_dev *rtwdev, int dtim_period);
+void rtw_set_dtim_period(struct rtw_dev *rtwdev, u8 dtim_period);
void rtw_get_channel_params(struct cfg80211_chan_def *chandef,
struct rtw_channel_params *ch_param);
bool check_hw_ready(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 target);
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
index a019f4085e738..1f5af09aed99f 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8821cu.c
@@ -37,6 +37,8 @@ static const struct usb_device_id rtw_8821cu_id_table[] = {
.driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* Edimax */
{ USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xd811, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* Edimax */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x2c4e, 0x0105, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)&(rtw8821c_hw_spec) }, /* Mercusys */
{},
};
MODULE_DEVICE_TABLE(usb, rtw_8821cu_id_table);
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.c b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
index 4a6c0a9266a09..0dba4b4c9464e 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.c
@@ -1045,7 +1045,8 @@ static int rtw8822b_set_antenna(struct rtw_dev *rtwdev,
hal->antenna_tx = antenna_tx;
hal->antenna_rx = antenna_rx;

- rtw8822b_config_trx_mode(rtwdev, antenna_tx, antenna_rx, false);
+ if (test_bit(RTW_FLAG_POWERON, rtwdev->flags))
+ rtw8822b_config_trx_mode(rtwdev, antenna_tx, antenna_rx, false);

return 0;
}
diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
index ae2c4ff74531a..b62b59bf528e7 100644
--- a/drivers/net/wireless/realtek/rtw89/fw.c
+++ b/drivers/net/wireless/realtek/rtw89/fw.c
@@ -6696,6 +6696,7 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
struct cfg80211_scan_request *req = &scan_req->req;
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
rtwvif_link->chanctx_idx);
+ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
u32 rx_fltr = rtwdev->hal.rx_fltr;
u8 mac_addr[ETH_ALEN];
@@ -6714,6 +6715,8 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
get_random_mask_addr(mac_addr, req->mac_addr,
req->mac_addr_mask);
+ else if (ieee80211_vif_is_mld(vif))
+ ether_addr_copy(mac_addr, vif->addr);
else
ether_addr_copy(mac_addr, rtwvif_link->mac_addr);
rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, true);
diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
index b956ea2add919..31465893f866d 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.c
+++ b/drivers/net/wireless/realtek/rtw89/mac.c
@@ -6673,6 +6673,7 @@ const struct rtw89_mac_gen_def rtw89_mac_gen_ax = {
.check_mac_en = rtw89_mac_check_mac_en_ax,
.sys_init = sys_init_ax,
.trx_init = trx_init_ax,
+ .err_imr_ctrl = err_imr_ctrl_ax,
.hci_func_en = rtw89_mac_hci_func_en_ax,
.dmac_func_pre_en = rtw89_mac_dmac_func_pre_en_ax,
.dle_func_en = dle_func_en_ax,
diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
index 7974849f41e25..ad496a4f441aa 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.h
+++ b/drivers/net/wireless/realtek/rtw89/mac.h
@@ -947,6 +947,7 @@ struct rtw89_mac_gen_def {
enum rtw89_mac_hwmod_sel sel);
int (*sys_init)(struct rtw89_dev *rtwdev);
int (*trx_init)(struct rtw89_dev *rtwdev);
+ void (*err_imr_ctrl)(struct rtw89_dev *rtwdev, bool en);
void (*hci_func_en)(struct rtw89_dev *rtwdev);
void (*dmac_func_pre_en)(struct rtw89_dev *rtwdev);
void (*dle_func_en)(struct rtw89_dev *rtwdev, bool enable);
diff --git a/drivers/net/wireless/realtek/rtw89/mac_be.c b/drivers/net/wireless/realtek/rtw89/mac_be.c
index f22eaa83297fb..2896abce94a3c 100644
--- a/drivers/net/wireless/realtek/rtw89/mac_be.c
+++ b/drivers/net/wireless/realtek/rtw89/mac_be.c
@@ -1166,7 +1166,7 @@ static int resp_pktctl_init_be(struct rtw89_dev *rtwdev, u8 mac_idx)

reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_RESP_CSI_RESERVED_PAGE, mac_idx);
rtw89_write32_mask(rtwdev, reg, B_BE_CSI_RESERVED_START_PAGE_MASK, qt_cfg.pktid);
- rtw89_write32_mask(rtwdev, reg, B_BE_CSI_RESERVED_PAGE_NUM_MASK, qt_cfg.pg_num);
+ rtw89_write32_mask(rtwdev, reg, B_BE_CSI_RESERVED_PAGE_NUM_MASK, qt_cfg.pg_num + 1);

return 0;
}
@@ -2575,6 +2575,7 @@ const struct rtw89_mac_gen_def rtw89_mac_gen_be = {
.check_mac_en = rtw89_mac_check_mac_en_be,
.sys_init = sys_init_be,
.trx_init = trx_init_be,
+ .err_imr_ctrl = err_imr_ctrl_be,
.hci_func_en = rtw89_mac_hci_func_en_be,
.dmac_func_pre_en = rtw89_mac_dmac_func_pre_en_be,
.dle_func_en = dle_func_en_be,
diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
index a87e1778a0d41..c3a8a7af06472 100644
--- a/drivers/net/wireless/realtek/rtw89/pci.c
+++ b/drivers/net/wireless/realtek/rtw89/pci.c
@@ -4250,6 +4250,7 @@ static int __maybe_unused rtw89_pci_resume(struct device *dev)
rtw89_write32_clr(rtwdev, R_AX_PCIE_PS_CTRL_V1,
B_AX_SEL_REQ_ENTR_L1);
}
+ rtw89_pci_hci_ldo(rtwdev);
rtw89_pci_l2_hci_ldo(rtwdev);
rtw89_pci_disable_eq(rtwdev);
rtw89_pci_cfg_dac(rtwdev);
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a.c b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
index 64a41f24b2adb..6f7ec37a76573 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
@@ -630,16 +630,30 @@ static int rtw8922a_read_efuse_rf(struct rtw89_dev *rtwdev, u8 *log_map)
static int rtw8922a_read_efuse(struct rtw89_dev *rtwdev, u8 *log_map,
enum rtw89_efuse_block block)
{
+ struct rtw89_efuse *efuse = &rtwdev->efuse;
+ int ret;
+
switch (block) {
case RTW89_EFUSE_BLOCK_HCI_DIG_PCIE_SDIO:
- return rtw8922a_read_efuse_pci_sdio(rtwdev, log_map);
+ ret = rtw8922a_read_efuse_pci_sdio(rtwdev, log_map);
+ break;
case RTW89_EFUSE_BLOCK_HCI_DIG_USB:
- return rtw8922a_read_efuse_usb(rtwdev, log_map);
+ ret = rtw8922a_read_efuse_usb(rtwdev, log_map);
+ break;
case RTW89_EFUSE_BLOCK_RF:
- return rtw8922a_read_efuse_rf(rtwdev, log_map);
+ ret = rtw8922a_read_efuse_rf(rtwdev, log_map);
+ break;
default:
- return 0;
+ ret = 0;
+ break;
+ }
+
+ if (!ret && is_zero_ether_addr(efuse->addr)) {
+ rtw89_info(rtwdev, "efuse mac address is zero, using random mac\n");
+ eth_random_addr(efuse->addr);
}
+
+ return ret;
}

#define THM_TRIM_POSITIVE_MASK BIT(6)
@@ -1690,6 +1704,32 @@ static int rtw8922a_ctrl_rx_path_tmac(struct rtw89_dev *rtwdev,
}

#define DIGITAL_PWR_COMP_REG_NUM 22
+static const u32 rtw8922a_digital_pwr_comp_2g_s0_val[][DIGITAL_PWR_COMP_REG_NUM] = {
+ {0x012C0064, 0x04B00258, 0x00432710, 0x019000A7, 0x06400320,
+ 0x0D05091D, 0x14D50FA0, 0x00000000, 0x01010000, 0x00000101,
+ 0x01010101, 0x02020201, 0x02010000, 0x03030202, 0x00000303,
+ 0x03020101, 0x06060504, 0x01010000, 0x06050403, 0x01000606,
+ 0x05040202, 0x07070706},
+ {0x012C0064, 0x04B00258, 0x00432710, 0x019000A7, 0x06400320,
+ 0x0D05091D, 0x14D50FA0, 0x00000000, 0x01010100, 0x00000101,
+ 0x01000000, 0x01010101, 0x01010000, 0x02020202, 0x00000404,
+ 0x03020101, 0x04040303, 0x02010000, 0x03030303, 0x00000505,
+ 0x03030201, 0x05050303},
+};
+
+static const u32 rtw8922a_digital_pwr_comp_2g_s1_val[][DIGITAL_PWR_COMP_REG_NUM] = {
+ {0x012C0064, 0x04B00258, 0x00432710, 0x019000A7, 0x06400320,
+ 0x0D05091D, 0x14D50FA0, 0x01010000, 0x01010101, 0x00000101,
+ 0x01010100, 0x01010101, 0x01010000, 0x02020202, 0x01000202,
+ 0x02020101, 0x03030202, 0x02010000, 0x05040403, 0x01000606,
+ 0x05040302, 0x07070605},
+ {0x012C0064, 0x04B00258, 0x00432710, 0x019000A7, 0x06400320,
+ 0x0D05091D, 0x14D50FA0, 0x00000000, 0x01010100, 0x00000101,
+ 0x01010000, 0x02020201, 0x02010100, 0x03030202, 0x01000404,
+ 0x04030201, 0x05050404, 0x01010100, 0x04030303, 0x01000505,
+ 0x03030101, 0x05050404},
+};
+
static const u32 rtw8922a_digital_pwr_comp_val[][DIGITAL_PWR_COMP_REG_NUM] = {
{0x012C0096, 0x044C02BC, 0x00322710, 0x015E0096, 0x03C8028A,
0x0BB80708, 0x17701194, 0x02020100, 0x03030303, 0x01000303,
@@ -1704,7 +1744,7 @@ static const u32 rtw8922a_digital_pwr_comp_val[][DIGITAL_PWR_COMP_REG_NUM] = {
};

static void rtw8922a_set_digital_pwr_comp(struct rtw89_dev *rtwdev,
- bool enable, u8 nss,
+ u8 band, u8 nss,
enum rtw89_rf_path path)
{
static const u32 ltpc_t0[2] = {R_BE_LTPC_T0_PATH0, R_BE_LTPC_T0_PATH1};
@@ -1712,14 +1752,25 @@ static void rtw8922a_set_digital_pwr_comp(struct rtw89_dev *rtwdev,
u32 addr, val;
u32 i;

- if (nss == 1)
- digital_pwr_comp = rtw8922a_digital_pwr_comp_val[0];
- else
- digital_pwr_comp = rtw8922a_digital_pwr_comp_val[1];
+ if (nss == 1) {
+ if (band == RTW89_BAND_2G)
+ digital_pwr_comp = path == RF_PATH_A ?
+ rtw8922a_digital_pwr_comp_2g_s0_val[0] :
+ rtw8922a_digital_pwr_comp_2g_s1_val[0];
+ else
+ digital_pwr_comp = rtw8922a_digital_pwr_comp_val[0];
+ } else {
+ if (band == RTW89_BAND_2G)
+ digital_pwr_comp = path == RF_PATH_A ?
+ rtw8922a_digital_pwr_comp_2g_s0_val[1] :
+ rtw8922a_digital_pwr_comp_2g_s1_val[1];
+ else
+ digital_pwr_comp = rtw8922a_digital_pwr_comp_val[1];
+ }

addr = ltpc_t0[path];
for (i = 0; i < DIGITAL_PWR_COMP_REG_NUM; i++, addr += 4) {
- val = enable ? digital_pwr_comp[i] : 0;
+ val = digital_pwr_comp[i];
rtw89_phy_write32(rtwdev, addr, val);
}
}
@@ -1728,7 +1779,7 @@ static void rtw8922a_digital_pwr_comp(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx)
{
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_CHANCTX_0);
- bool enable = chan->band_type != RTW89_BAND_2G;
+ u8 band = chan->band_type;
u8 path;

if (rtwdev->mlo_dbcc_mode == MLO_1_PLUS_1_1RF) {
@@ -1736,10 +1787,10 @@ static void rtw8922a_digital_pwr_comp(struct rtw89_dev *rtwdev,
path = RF_PATH_A;
else
path = RF_PATH_B;
- rtw8922a_set_digital_pwr_comp(rtwdev, enable, 1, path);
+ rtw8922a_set_digital_pwr_comp(rtwdev, band, 1, path);
} else {
- rtw8922a_set_digital_pwr_comp(rtwdev, enable, 2, RF_PATH_A);
- rtw8922a_set_digital_pwr_comp(rtwdev, enable, 2, RF_PATH_B);
+ rtw8922a_set_digital_pwr_comp(rtwdev, band, 2, RF_PATH_A);
+ rtw8922a_set_digital_pwr_comp(rtwdev, band, 2, RF_PATH_B);
}
}

diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
index 2a303a758e276..43970d0e5b33f 100644
--- a/drivers/net/wireless/realtek/rtw89/ser.c
+++ b/drivers/net/wireless/realtek/rtw89/ser.c
@@ -429,6 +429,14 @@ static void hal_send_m4_event(struct rtw89_ser *ser)
rtw89_mac_set_err_status(rtwdev, MAC_AX_ERR_L1_RCVY_EN);
}

+static void hal_enable_err_imr(struct rtw89_ser *ser)
+{
+ struct rtw89_dev *rtwdev = container_of(ser, struct rtw89_dev, ser);
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+
+ mac->err_imr_ctrl(rtwdev, true);
+}
+
/* state handler */
static void ser_idle_st_hdl(struct rtw89_ser *ser, u8 evt)
{
@@ -545,6 +553,8 @@ static void ser_do_hci_st_hdl(struct rtw89_ser *ser, u8 evt)
break;

case SER_EV_MAC_RESET_DONE:
+ hal_enable_err_imr(ser);
+
ser_state_goto(ser, SER_IDLE_ST);
break;

diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
index fdb715dc175c1..db68ac988cd24 100644
--- a/drivers/net/wireless/realtek/rtw89/wow.c
+++ b/drivers/net/wireless/realtek/rtw89/wow.c
@@ -759,6 +759,10 @@ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)

reason = rtw89_read8(rtwdev, wow_reason_reg);
switch (reason) {
+ case RTW89_WOW_RSN_RX_DISASSOC:
+ wakeup.disconnect = true;
+ rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx disassoc\n");
+ break;
case RTW89_WOW_RSN_RX_DEAUTH:
wakeup.disconnect = true;
rtw89_debug(rtwdev, RTW89_DBG_WOW, "WOW: Rx deauth\n");
diff --git a/drivers/net/wireless/realtek/rtw89/wow.h b/drivers/net/wireless/realtek/rtw89/wow.h
index f91991e8f2e30..d4b618fa4e3e5 100644
--- a/drivers/net/wireless/realtek/rtw89/wow.h
+++ b/drivers/net/wireless/realtek/rtw89/wow.h
@@ -28,6 +28,7 @@
enum rtw89_wake_reason {
RTW89_WOW_RSN_RX_PTK_REKEY = 0x1,
RTW89_WOW_RSN_RX_GTK_REKEY = 0x2,
+ RTW89_WOW_RSN_RX_DISASSOC = 0x4,
RTW89_WOW_RSN_RX_DEAUTH = 0x8,
RTW89_WOW_RSN_DISCONNECT = 0x10,
RTW89_WOW_RSN_RX_MAGIC_PKT = 0x21,
diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
index f8bc9a39bfa30..1d7e3ad900c12 100644
--- a/drivers/net/wwan/mhi_wwan_mbim.c
+++ b/drivers/net/wwan/mhi_wwan_mbim.c
@@ -98,7 +98,8 @@ static struct mhi_mbim_link *mhi_mbim_get_link_rcu(struct mhi_mbim_context *mbim
static int mhi_mbim_get_link_mux_id(struct mhi_controller *cntrl)
{
if (strcmp(cntrl->name, "foxconn-dw5934e") == 0 ||
- strcmp(cntrl->name, "foxconn-t99w640") == 0)
+ strcmp(cntrl->name, "foxconn-t99w640") == 0 ||
+ strcmp(cntrl->name, "foxconn-t99w760") == 0)
return WDS_BIND_MUX_DATA_PORT_MUX_ID;

return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index a78a25b872409..61b547aab286a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -735,10 +735,11 @@ static void connect(struct backend_info *be)
*/
requested_num_queues = xenbus_read_unsigned(dev->otherend,
"multi-queue-num-queues", 1);
- if (requested_num_queues > xenvif_max_queues) {
+ if (requested_num_queues > xenvif_max_queues ||
+ requested_num_queues == 0) {
/* buggy or malicious guest */
xenbus_dev_fatal(dev, -EINVAL,
- "guest requested %u queues, exceeding the maximum of %u.",
+ "guest requested %u queues, but valid range is 1 - %u.",
requested_num_queues, xenvif_max_queues);
return;
}
diff --git a/drivers/nfc/nxp-nci/i2c.c b/drivers/nfc/nxp-nci/i2c.c
index 049662ffdf972..6a5ce8ff91f0b 100644
--- a/drivers/nfc/nxp-nci/i2c.c
+++ b/drivers/nfc/nxp-nci/i2c.c
@@ -305,7 +305,7 @@ static int nxp_nci_i2c_probe(struct i2c_client *client)

r = request_threaded_irq(client->irq, NULL,
nxp_nci_i2c_irq_thread_fn,
- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+ IRQF_ONESHOT,
NXP_NCI_I2C_DRIVER_NAME, phy);
if (r < 0)
nfc_err(&client->dev, "Unable to register IRQ handler\n");
diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
index f851397b65d6e..0536521fa6ccc 100644
--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
@@ -1202,7 +1202,8 @@ static void switchtec_ntb_init_mw(struct switchtec_ntb *sndev)
sndev->mmio_self_ctrl);

sndev->nr_lut_mw = ioread16(&sndev->mmio_self_ctrl->lut_table_entries);
- sndev->nr_lut_mw = rounddown_pow_of_two(sndev->nr_lut_mw);
+ if (sndev->nr_lut_mw)
+ sndev->nr_lut_mw = rounddown_pow_of_two(sndev->nr_lut_mw);

dev_dbg(&sndev->stdev->dev, "MWs: %d direct, %d lut\n",
sndev->nr_direct_mw, sndev->nr_lut_mw);
@@ -1212,7 +1213,8 @@ static void switchtec_ntb_init_mw(struct switchtec_ntb *sndev)

sndev->peer_nr_lut_mw =
ioread16(&sndev->mmio_peer_ctrl->lut_table_entries);
- sndev->peer_nr_lut_mw = rounddown_pow_of_two(sndev->peer_nr_lut_mw);
+ if (sndev->peer_nr_lut_mw)
+ sndev->peer_nr_lut_mw = rounddown_pow_of_two(sndev->peer_nr_lut_mw);

dev_dbg(&sndev->stdev->dev, "Peer MWs: %d direct, %d lut\n",
sndev->peer_nr_direct_mw, sndev->peer_nr_lut_mw);
@@ -1314,6 +1316,12 @@ static void switchtec_ntb_init_shared(struct switchtec_ntb *sndev)
for (i = 0; i < sndev->nr_lut_mw; i++) {
int idx = sndev->nr_direct_mw + i;

+ if (idx >= MAX_MWS) {
+ dev_err(&sndev->stdev->dev,
+ "Total number of MW cannot be bigger than %d", MAX_MWS);
+ break;
+ }
+
sndev->self_shared->mw_sizes[idx] = LUT_SIZE;
}
}
diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
index 4f775c3e218f4..868a2baef1d52 100644
--- a/drivers/ntb/ntb_transport.c
+++ b/drivers/ntb/ntb_transport.c
@@ -1229,9 +1229,9 @@ static int ntb_transport_init_queue(struct ntb_transport_ctx *nt,
qp->tx_max_entry = tx_size / qp->tx_max_frame;

if (nt->debugfs_node_dir) {
- char debugfs_name[4];
+ char debugfs_name[8];

- snprintf(debugfs_name, 4, "qp%d", qp_num);
+ snprintf(debugfs_name, sizeof(debugfs_name), "qp%d", qp_num);
qp->debugfs_dir = debugfs_create_dir(debugfs_name,
nt->debugfs_node_dir);

diff --git a/drivers/nvdimm/nd_virtio.c b/drivers/nvdimm/nd_virtio.c
index f55d60922b87d..e8ac7425c97cd 100644
--- a/drivers/nvdimm/nd_virtio.c
+++ b/drivers/nvdimm/nd_virtio.c
@@ -44,6 +44,8 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
unsigned long flags;
int err, err1;

+ guard(mutex)(&vpmem->flush_lock);
+
/*
* Don't bother to submit the request to the device if the device is
* not activated.
@@ -53,7 +55,6 @@ static int virtio_pmem_flush(struct nd_region *nd_region)
return -EIO;
}

- might_sleep();
req_data = kmalloc(sizeof(*req_data), GFP_KERNEL);
if (!req_data)
return -ENOMEM;
diff --git a/drivers/nvdimm/virtio_pmem.c b/drivers/nvdimm/virtio_pmem.c
index c9b97aeabf854..89d3d4fc3ce89 100644
--- a/drivers/nvdimm/virtio_pmem.c
+++ b/drivers/nvdimm/virtio_pmem.c
@@ -64,6 +64,7 @@ static int virtio_pmem_probe(struct virtio_device *vdev)
goto out_err;
}

+ mutex_init(&vpmem->flush_lock);
vpmem->vdev = vdev;
vdev->priv = vpmem;
err = init_vq(vpmem);
diff --git a/drivers/nvdimm/virtio_pmem.h b/drivers/nvdimm/virtio_pmem.h
index 0dddefe594c46..f72cf17f9518f 100644
--- a/drivers/nvdimm/virtio_pmem.h
+++ b/drivers/nvdimm/virtio_pmem.h
@@ -13,6 +13,7 @@
#include <linux/module.h>
#include <uapi/linux/virtio_pmem.h>
#include <linux/libnvdimm.h>
+#include <linux/mutex.h>
#include <linux/spinlock.h>

struct virtio_pmem_request {
@@ -35,6 +36,9 @@ struct virtio_pmem {
/* Virtio pmem request queue */
struct virtqueue *req_vq;

+ /* Serialize flush requests to the device. */
+ struct mutex flush_lock;
+
/* nvdimm bus registers virtio pmem device */
struct nvdimm_bus *nvdimm_bus;
struct nvdimm_bus_descriptor nd_desc;
diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
index d1869e6de3844..021bf3310fb2a 100644
--- a/drivers/nvmem/core.c
+++ b/drivers/nvmem/core.c
@@ -856,6 +856,7 @@ static int nvmem_add_cells_from_dt(struct nvmem_device *nvmem, struct device_nod
kfree(info.name);
if (ret) {
of_node_put(child);
+ of_node_put(info.np);
return ret;
}
}
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 63b5b435bd3ae..ba223736237e8 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -795,11 +795,13 @@ static void __init of_unittest_property_copy(void)

new = __of_prop_dup(&p1, GFP_KERNEL);
unittest(new && propcmp(&p1, new), "empty property didn't copy correctly\n");
- __of_prop_free(new);
+ if (new)
+ __of_prop_free(new);

new = __of_prop_dup(&p2, GFP_KERNEL);
unittest(new && propcmp(&p2, new), "non-empty property didn't copy correctly\n");
- __of_prop_free(new);
+ if (new)
+ __of_prop_free(new);
#endif
}

diff --git a/drivers/opp/core.c b/drivers/opp/core.c
index 5ac209472c0cf..b5df41ce3afff 100644
--- a/drivers/opp/core.c
+++ b/drivers/opp/core.c
@@ -226,7 +226,7 @@ unsigned int dev_pm_opp_get_level(struct dev_pm_opp *opp)
{
if (IS_ERR_OR_NULL(opp) || !opp->available) {
pr_err("%s: Invalid parameters\n", __func__);
- return 0;
+ return U32_MAX;
}

return opp->level;
diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index d428457d9c432..3e3168204e303 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -771,7 +771,14 @@ static int dw_pcie_iatu_setup(struct dw_pcie_rp *pp)
dev_warn(pci->dev, "Ranges exceed outbound iATU size (%d)\n",
pci->num_ob_windows);

- pp->msg_atu_index = i;
+ if (pp->use_atu_msg) {
+ if (pci->num_ob_windows > ++i) {
+ pp->msg_atu_index = i;
+ } else {
+ dev_err(pci->dev, "Cannot add outbound window for MSG TLP\n");
+ return -ENOMEM;
+ }
+ }

i = 0;
resource_list_for_each_entry(entry, &pp->bridge->dma_ranges) {
diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index 2fca35dd72a76..5d27cd149f512 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -133,7 +133,6 @@

/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
#define PARF_INT_ALL_LINK_UP BIT(13)
-#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)

/* PARF_NO_SNOOP_OVERIDE register fields */
#define WR_NO_SNOOP_OVERIDE_EN BIT(1)
@@ -1721,8 +1720,7 @@ static int qcom_pcie_probe(struct platform_device *pdev)
goto err_host_deinit;
}

- writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
- pcie->parf + PARF_INT_ALL_MASK);
+ writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK);
}

qcom_pcie_icc_opp_update(pcie);
diff --git a/drivers/pci/controller/pcie-mediatek.c b/drivers/pci/controller/pcie-mediatek.c
index 7f7d04c2ea573..c4cc9d76b42a0 100644
--- a/drivers/pci/controller/pcie-mediatek.c
+++ b/drivers/pci/controller/pcie-mediatek.c
@@ -579,8 +579,10 @@ static int mtk_pcie_init_irq_domain(struct mtk_pcie_port *port,

if (IS_ENABLED(CONFIG_PCI_MSI)) {
ret = mtk_pcie_allocate_msi_domains(port);
- if (ret)
+ if (ret) {
+ irq_domain_remove(port->irq_domain);
return ret;
+ }
}

return 0;
diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
index 43feb6139fa36..8b392a8363bb1 100644
--- a/drivers/pci/endpoint/pci-ep-cfs.c
+++ b/drivers/pci/endpoint/pci-ep-cfs.c
@@ -68,8 +68,8 @@ static int pci_secondary_epc_epf_link(struct config_item *epf_item,
return 0;
}

-static void pci_secondary_epc_epf_unlink(struct config_item *epc_item,
- struct config_item *epf_item)
+static void pci_secondary_epc_epf_unlink(struct config_item *epf_item,
+ struct config_item *epc_item)
{
struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
@@ -132,8 +132,8 @@ static int pci_primary_epc_epf_link(struct config_item *epf_item,
return 0;
}

-static void pci_primary_epc_epf_unlink(struct config_item *epc_item,
- struct config_item *epf_item)
+static void pci_primary_epc_epf_unlink(struct config_item *epf_item,
+ struct config_item *epc_item)
{
struct pci_epf_group *epf_group = to_pci_epf_group(epf_item->ci_parent);
struct pci_epc_group *epc_group = to_pci_epc_group(epc_item);
diff --git a/drivers/pci/iov.c b/drivers/pci/iov.c
index 44a33df3c69c2..d595a345a7d47 100644
--- a/drivers/pci/iov.c
+++ b/drivers/pci/iov.c
@@ -447,7 +447,9 @@ static ssize_t sriov_numvfs_store(struct device *dev,

if (num_vfs == 0) {
/* disable VFs */
+ pci_lock_rescan_remove();
ret = pdev->driver->sriov_configure(pdev, 0);
+ pci_unlock_rescan_remove();
goto exit;
}

@@ -459,7 +461,9 @@ static ssize_t sriov_numvfs_store(struct device *dev,
goto exit;
}

+ pci_lock_rescan_remove();
ret = pdev->driver->sriov_configure(pdev, num_vfs);
+ pci_unlock_rescan_remove();
if (ret < 0)
goto exit;

@@ -581,18 +585,15 @@ static int sriov_add_vfs(struct pci_dev *dev, u16 num_vfs)
if (dev->no_vf_scan)
return 0;

- pci_lock_rescan_remove();
for (i = 0; i < num_vfs; i++) {
rc = pci_iov_add_virtfn(dev, i);
if (rc)
goto failed;
}
- pci_unlock_rescan_remove();
return 0;
failed:
while (i--)
pci_iov_remove_virtfn(dev, i);
- pci_unlock_rescan_remove();

return rc;
}
@@ -712,10 +713,8 @@ static void sriov_del_vfs(struct pci_dev *dev)
struct pci_sriov *iov = dev->sriov;
int i;

- pci_lock_rescan_remove();
for (i = 0; i < iov->num_VFs; i++)
pci_iov_remove_virtfn(dev, i);
- pci_unlock_rescan_remove();
}

static void sriov_disable(struct pci_dev *dev)
diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
index 8b88487886184..54627c3a2dc8c 100644
--- a/drivers/pci/msi/msi.c
+++ b/drivers/pci/msi/msi.c
@@ -739,7 +739,7 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,

ret = msix_setup_interrupts(dev, entries, nvec, affd);
if (ret)
- goto out_disable;
+ goto out_unmap;

/* Disable INTX */
pci_intx_for_msi(dev, 0);
@@ -760,6 +760,8 @@ static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries,
pcibios_free_irq(dev);
return 0;

+out_unmap:
+ iounmap(dev->msix_base);
out_disable:
dev->msix_enabled = 0;
pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);
diff --git a/drivers/pci/p2pdma.c b/drivers/pci/p2pdma.c
index 52e1564eadd0b..ec53b2b0d57fe 100644
--- a/drivers/pci/p2pdma.c
+++ b/drivers/pci/p2pdma.c
@@ -143,6 +143,7 @@ static int p2pmem_alloc_mmap(struct file *filp, struct kobject *kobj,
ret = vm_insert_page(vma, vaddr, virt_to_page(kaddr));
if (ret) {
gen_pool_free(p2pdma->pool, (uintptr_t)kaddr, len);
+ percpu_ref_put(ref);
return ret;
}
percpu_ref_get(ref);
diff --git a/drivers/pci/pci-acpi.c b/drivers/pci/pci-acpi.c
index 0cd8a75e22580..3d02959e222fc 100644
--- a/drivers/pci/pci-acpi.c
+++ b/drivers/pci/pci-acpi.c
@@ -271,21 +271,6 @@ static acpi_status decode_type1_hpx_record(union acpi_object *record,
return AE_OK;
}

-static bool pcie_root_rcb_set(struct pci_dev *dev)
-{
- struct pci_dev *rp = pcie_find_root_port(dev);
- u16 lnkctl;
-
- if (!rp)
- return false;
-
- pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &lnkctl);
- if (lnkctl & PCI_EXP_LNKCTL_RCB)
- return true;
-
- return false;
-}
-
/* _HPX PCI Express Setting Record (Type 2) */
struct hpx_type2 {
u32 revision;
@@ -311,6 +296,7 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
{
int pos;
u32 reg32;
+ const struct pci_host_bridge *host;

if (!hpx)
return;
@@ -318,6 +304,15 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
if (!pci_is_pcie(dev))
return;

+ host = pci_find_host_bridge(dev->bus);
+
+ /*
+ * Only do the _HPX Type 2 programming if OS owns PCIe native
+ * hotplug but not AER.
+ */
+ if (!host->native_pcie_hotplug || host->native_aer)
+ return;
+
if (hpx->revision > 1) {
pci_warn(dev, "PCIe settings rev %d not supported\n",
hpx->revision);
@@ -325,33 +320,27 @@ static void program_hpx_type2(struct pci_dev *dev, struct hpx_type2 *hpx)
}

/*
- * Don't allow _HPX to change MPS or MRRS settings. We manage
- * those to make sure they're consistent with the rest of the
- * platform.
+ * We only allow _HPX to program DEVCTL bits related to AER, namely
+ * PCI_EXP_DEVCTL_CERE, PCI_EXP_DEVCTL_NFERE, PCI_EXP_DEVCTL_FERE,
+ * and PCI_EXP_DEVCTL_URRE.
+ *
+ * The rest of DEVCTL is managed by the OS to make sure it's
+ * consistent with the rest of the platform.
*/
- hpx->pci_exp_devctl_and |= PCI_EXP_DEVCTL_PAYLOAD |
- PCI_EXP_DEVCTL_READRQ;
- hpx->pci_exp_devctl_or &= ~(PCI_EXP_DEVCTL_PAYLOAD |
- PCI_EXP_DEVCTL_READRQ);
+ hpx->pci_exp_devctl_and |= ~PCI_EXP_AER_FLAGS;
+ hpx->pci_exp_devctl_or &= PCI_EXP_AER_FLAGS;

/* Initialize Device Control Register */
pcie_capability_clear_and_set_word(dev, PCI_EXP_DEVCTL,
~hpx->pci_exp_devctl_and, hpx->pci_exp_devctl_or);

- /* Initialize Link Control Register */
+ /* Log if _HPX attempts to modify Link Control Register */
if (pcie_cap_has_lnkctl(dev)) {
-
- /*
- * If the Root Port supports Read Completion Boundary of
- * 128, set RCB to 128. Otherwise, clear it.
- */
- hpx->pci_exp_lnkctl_and |= PCI_EXP_LNKCTL_RCB;
- hpx->pci_exp_lnkctl_or &= ~PCI_EXP_LNKCTL_RCB;
- if (pcie_root_rcb_set(dev))
- hpx->pci_exp_lnkctl_or |= PCI_EXP_LNKCTL_RCB;
-
- pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
- ~hpx->pci_exp_lnkctl_and, hpx->pci_exp_lnkctl_or);
+ if (hpx->pci_exp_lnkctl_and != 0xffff ||
+ hpx->pci_exp_lnkctl_or != 0)
+ pci_info(dev, "_HPX attempts Link Control setting (AND %#06x OR %#06x)\n",
+ hpx->pci_exp_lnkctl_and,
+ hpx->pci_exp_lnkctl_or);
}

/* Find Advanced Error Reporting Enhanced Capability */
diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index a00a2ce01045f..9846ab70cff11 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -1656,6 +1656,14 @@ static int pci_dma_configure(struct device *dev)
ret = acpi_dma_configure(dev, acpi_get_dma_attr(adev));
}

+ /*
+ * Attempt to enable ACS regardless of capability because some Root
+ * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have
+ * the standard ACS capability but still support ACS via those
+ * quirks.
+ */
+ pci_enable_acs(to_pci_dev(dev));
+
pci_put_host_bridge_device(bridge);

if (!ret && !driver->driver_managed_dma) {
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 963436edea1cb..040c0be2d6e39 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -1072,7 +1072,7 @@ static void pci_std_enable_acs(struct pci_dev *dev, struct pci_acs *caps)
* pci_enable_acs - enable ACS if hardware support it
* @dev: the PCI device
*/
-static void pci_enable_acs(struct pci_dev *dev)
+void pci_enable_acs(struct pci_dev *dev)
{
struct pci_acs caps;
bool enable_acs = false;
@@ -1573,6 +1573,9 @@ static int pci_set_low_power_state(struct pci_dev *dev, pci_power_t state, bool
|| (state == PCI_D2 && !dev->d2_support))
return -EIO;

+ if (dev->current_state == state)
+ return 0;
+
pci_read_config_word(dev, dev->pm_cap + PCI_PM_CTRL, &pmcsr);
if (PCI_POSSIBLE_ERROR(pmcsr)) {
pci_err(dev, "Unable to change power state from %s to %s, device inaccessible\n",
@@ -3715,14 +3718,6 @@ bool pci_acs_path_enabled(struct pci_dev *start,
void pci_acs_init(struct pci_dev *dev)
{
dev->acs_cap = pci_find_ext_capability(dev, PCI_EXT_CAP_ID_ACS);
-
- /*
- * Attempt to enable ACS regardless of capability because some Root
- * Ports (e.g. those quirked with *_intel_pch_acs_*) do not have
- * the standard ACS capability but still support ACS via those
- * quirks.
- */
- pci_enable_acs(dev);
}

/**
@@ -5607,10 +5602,9 @@ static int pci_bus_trylock(struct pci_bus *bus)
/* Do any devices on or below this slot prevent a bus reset? */
static bool pci_slot_resettable(struct pci_slot *slot)
{
- struct pci_dev *dev;
+ struct pci_dev *dev, *bridge = slot->bus->self;

- if (slot->bus->self &&
- (slot->bus->self->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
+ if (bridge && (bridge->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET))
return false;

list_for_each_entry(dev, &slot->bus->devices, bus_list) {
@@ -5627,7 +5621,10 @@ static bool pci_slot_resettable(struct pci_slot *slot)
/* Lock devices from the top of the tree down */
static void pci_slot_lock(struct pci_slot *slot)
{
- struct pci_dev *dev;
+ struct pci_dev *dev, *bridge = slot->bus->self;
+
+ if (bridge)
+ pci_dev_lock(bridge);

list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
@@ -5642,7 +5639,7 @@ static void pci_slot_lock(struct pci_slot *slot)
/* Unlock devices from the bottom of the tree up */
static void pci_slot_unlock(struct pci_slot *slot)
{
- struct pci_dev *dev;
+ struct pci_dev *dev, *bridge = slot->bus->self;

list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
@@ -5652,21 +5649,25 @@ static void pci_slot_unlock(struct pci_slot *slot)
else
pci_dev_unlock(dev);
}
+
+ if (bridge)
+ pci_dev_unlock(bridge);
}

/* Return 1 on successful lock, 0 on contention */
static int pci_slot_trylock(struct pci_slot *slot)
{
- struct pci_dev *dev;
+ struct pci_dev *dev, *bridge = slot->bus->self;
+
+ if (bridge && !pci_dev_trylock(bridge))
+ return 0;

list_for_each_entry(dev, &slot->bus->devices, bus_list) {
if (!dev->slot || dev->slot != slot)
continue;
if (dev->subordinate) {
- if (!pci_bus_trylock(dev->subordinate)) {
- pci_dev_unlock(dev);
+ if (!pci_bus_trylock(dev->subordinate))
goto unlock;
- }
} else if (!pci_dev_trylock(dev))
goto unlock;
}
@@ -5682,6 +5683,9 @@ static int pci_slot_trylock(struct pci_slot *slot)
else
pci_dev_unlock(dev);
}
+
+ if (bridge)
+ pci_dev_unlock(bridge);
return 0;
}

@@ -6883,7 +6887,7 @@ static void of_pci_bus_release_domain_nr(struct device *parent, int domain_nr)
return;

/* Release domain from IDA where it was allocated. */
- if (of_get_pci_domain_nr(parent->of_node) == domain_nr)
+ if (parent && of_get_pci_domain_nr(parent->of_node) == domain_nr)
ida_free(&pci_domain_nr_static_ida, domain_nr);
else
ida_free(&pci_domain_nr_dynamic_ida, domain_nr);
diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
index c951f861a69b2..b9b97ec0f836a 100644
--- a/drivers/pci/pci.h
+++ b/drivers/pci/pci.h
@@ -80,6 +80,13 @@
#define PCIE_MSG_CODE_DEASSERT_INTC 0x26
#define PCIE_MSG_CODE_DEASSERT_INTD 0x27

+#define PCI_BUS_BRIDGE_IO_WINDOW 0
+#define PCI_BUS_BRIDGE_MEM_WINDOW 1
+#define PCI_BUS_BRIDGE_PREF_MEM_WINDOW 2
+
+#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
+ PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
+
extern const unsigned char pcie_link_speed[];
extern bool pci_early_dump;

@@ -646,6 +653,7 @@ static inline resource_size_t pci_resource_alignment(struct pci_dev *dev,
}

void pci_acs_init(struct pci_dev *dev);
+void pci_enable_acs(struct pci_dev *dev);
#ifdef CONFIG_PCI_QUIRKS
int pci_dev_specific_acs_enabled(struct pci_dev *dev, u16 acs_flags);
int pci_dev_specific_enable_acs(struct pci_dev *dev);
diff --git a/drivers/pci/pcie/aer.c b/drivers/pci/pcie/aer.c
index e5cbea3a4968b..fe7603e7ee8c8 100644
--- a/drivers/pci/pcie/aer.c
+++ b/drivers/pci/pcie/aer.c
@@ -218,9 +218,6 @@ void pcie_ecrc_get_policy(char *str)
}
#endif /* CONFIG_PCIE_ECRC */

-#define PCI_EXP_AER_FLAGS (PCI_EXP_DEVCTL_CERE | PCI_EXP_DEVCTL_NFERE | \
- PCI_EXP_DEVCTL_FERE | PCI_EXP_DEVCTL_URRE)
-
int pcie_aer_is_native(struct pci_dev *dev)
{
struct pci_host_bridge *host = pci_find_host_bridge(dev->bus);
@@ -1389,6 +1386,20 @@ static void aer_disable_irq(struct pci_dev *pdev)
pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_COMMAND, reg32);
}

+static int clear_status_iter(struct pci_dev *dev, void *data)
+{
+ u16 devctl;
+
+ /* Skip if pci_enable_pcie_error_reporting() hasn't been called yet */
+ pcie_capability_read_word(dev, PCI_EXP_DEVCTL, &devctl);
+ if (!(devctl & PCI_EXP_AER_FLAGS))
+ return 0;
+
+ pci_aer_clear_status(dev);
+ pcie_clear_device_status(dev);
+ return 0;
+}
+
/**
* aer_enable_rootport - enable Root Port's interrupts when receiving messages
* @rpc: pointer to a Root Port data structure
@@ -1410,9 +1421,19 @@ static void aer_enable_rootport(struct aer_rpc *rpc)
pcie_capability_clear_word(pdev, PCI_EXP_RTCTL,
SYSTEM_ERROR_INTR_ON_MESG_MASK);

- /* Clear error status */
+ /* Clear error status of this Root Port or RCEC */
pci_read_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, &reg32);
pci_write_config_dword(pdev, aer + PCI_ERR_ROOT_STATUS, reg32);
+
+ /* Clear error status of agents reporting to this Root Port or RCEC */
+ if (reg32 & AER_ERR_STATUS_MASK) {
+ if (pci_pcie_type(pdev) == PCI_EXP_TYPE_RC_EC)
+ pcie_walk_rcec(pdev, clear_status_iter, NULL);
+ else if (pdev->subordinate)
+ pci_walk_bus(pdev->subordinate, clear_status_iter,
+ NULL);
+ }
+
pci_read_config_dword(pdev, aer + PCI_ERR_COR_STATUS, &reg32);
pci_write_config_dword(pdev, aer + PCI_ERR_COR_STATUS, reg32);
pci_read_config_dword(pdev, aer + PCI_ERR_UNCOR_STATUS, &reg32);
diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c
index ec2c768c687f0..75068c5029b01 100644
--- a/drivers/pci/pcie/portdrv.c
+++ b/drivers/pci/pcie/portdrv.c
@@ -555,10 +555,10 @@ static int pcie_port_remove_service(struct device *dev)

pciedev = to_pcie_device(dev);
driver = to_service_driver(dev->driver);
- if (driver && driver->remove) {
+ if (driver && driver->remove)
driver->remove(pciedev);
- put_device(dev);
- }
+
+ put_device(dev);
return 0;
}

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index b358b93a02753..9e419f14738a2 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -539,9 +539,13 @@ void pci_read_bridge_bases(struct pci_bus *child)
for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++)
child->resource[i] = &dev->resource[PCI_BRIDGE_RESOURCES+i];

- pci_read_bridge_io(child->self, child->resource[0], false);
- pci_read_bridge_mmio(child->self, child->resource[1], false);
- pci_read_bridge_mmio_pref(child->self, child->resource[2], false);
+ pci_read_bridge_io(child->self,
+ child->resource[PCI_BUS_BRIDGE_IO_WINDOW], false);
+ pci_read_bridge_mmio(child->self,
+ child->resource[PCI_BUS_BRIDGE_MEM_WINDOW], false);
+ pci_read_bridge_mmio_pref(child->self,
+ child->resource[PCI_BUS_BRIDGE_PREF_MEM_WINDOW],
+ false);

if (dev->transparent) {
pci_bus_for_each_resource(child->parent, res) {
@@ -2175,7 +2179,8 @@ int pci_configure_extended_tags(struct pci_dev *dev, void *ign)
u16 ctl;
int ret;

- if (!pci_is_pcie(dev))
+ /* PCI_EXP_DEVCTL_EXT_TAG is RsvdP in VFs */
+ if (!pci_is_pcie(dev) || dev->is_virtfn)
return 0;

ret = pcie_capability_read_dword(dev, PCI_EXP_DEVCAP, &cap);
@@ -2300,6 +2305,37 @@ static void pci_configure_serr(struct pci_dev *dev)
}
}

+static void pci_configure_rcb(struct pci_dev *dev)
+{
+ struct pci_dev *rp;
+ u16 rp_lnkctl;
+
+ /*
+ * Per PCIe r7.0, sec 7.5.3.7, RCB is only meaningful in Root Ports
+ * (where it is read-only), Endpoints, and Bridges. It may only be
+ * set for Endpoints and Bridges if it is set in the Root Port. For
+ * Endpoints, it is 'RsvdP' for Virtual Functions.
+ */
+ if (!pci_is_pcie(dev) ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_UPSTREAM ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_DOWNSTREAM ||
+ pci_pcie_type(dev) == PCI_EXP_TYPE_RC_EC ||
+ dev->is_virtfn)
+ return;
+
+ /* Root Port often not visible to virtualized guests */
+ rp = pcie_find_root_port(dev);
+ if (!rp)
+ return;
+
+ pcie_capability_read_word(rp, PCI_EXP_LNKCTL, &rp_lnkctl);
+ pcie_capability_clear_and_set_word(dev, PCI_EXP_LNKCTL,
+ PCI_EXP_LNKCTL_RCB,
+ (rp_lnkctl & PCI_EXP_LNKCTL_RCB) ?
+ PCI_EXP_LNKCTL_RCB : 0);
+}
+
static void pci_configure_device(struct pci_dev *dev)
{
pci_configure_mps(dev);
@@ -2309,6 +2345,7 @@ static void pci_configure_device(struct pci_dev *dev)
pci_configure_aspm_l1ss(dev);
pci_configure_eetlp_prefix(dev);
pci_configure_serr(dev);
+ pci_configure_rcb(dev);

pci_acpi_program_hp_params(dev);
}
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 18fa918b4e537..3d05ea35c536f 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -3748,6 +3748,14 @@ static void quirk_no_bus_reset(struct pci_dev *dev)
dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET;
}

+/*
+ * After asserting Secondary Bus Reset to downstream devices via a GB10
+ * Root Port, the link may not retrain correctly.
+ * https://lore.kernel.org/r/20251113084441.2124737-1-Johnny-CC.Chang@xxxxxxxxxxxx
+ */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22CE, quirk_no_bus_reset);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_NVIDIA, 0x22D0, quirk_no_bus_reset);
+
/*
* Some NVIDIA GPU devices do not work with bus reset, SBR needs to be
* prevented for those affected devices.
@@ -3791,6 +3799,16 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_CAVIUM, 0xa100, quirk_no_bus_reset);
*/
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TI, 0xb005, quirk_no_bus_reset);

+/*
+ * Reports from users making use of PCI device assignment with ASM1164
+ * controllers indicate an issue with bus reset where the device fails to
+ * retrain. The issue appears more common in configurations with multiple
+ * controllers. The device does indicate PM reset support (NoSoftRst-),
+ * therefore this still leaves a viable reset method.
+ * https://forum.proxmox.com/threads/problems-with-pcie-passthrough-with-two-identical-devices.149003/
+ */
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ASMEDIA, 0x1164, quirk_no_bus_reset);
+
static void quirk_no_pm_reset(struct pci_dev *dev)
{
/*
@@ -5107,6 +5125,10 @@ static const struct pci_dev_acs_enabled {
{ PCI_VENDOR_ID_QCOM, 0x0401, pci_quirk_qcom_rp_acs },
/* QCOM SA8775P root port */
{ PCI_VENDOR_ID_QCOM, 0x0115, pci_quirk_qcom_rp_acs },
+ /* QCOM Hamoa root port */
+ { PCI_VENDOR_ID_QCOM, 0x0111, pci_quirk_qcom_rp_acs },
+ /* QCOM Glymur root port */
+ { PCI_VENDOR_ID_QCOM, 0x0120, pci_quirk_qcom_rp_acs },
/* HXT SD4800 root ports. The ACS design is same as QCOM QDF2xxx */
{ PCI_VENDOR_ID_HXT, 0x0401, pci_quirk_qcom_rp_acs },
/* Intel PCH root ports */
@@ -5581,6 +5603,7 @@ static void quirk_no_ext_tags(struct pci_dev *pdev)
pci_walk_bus(bridge->bus, pci_configure_extended_tags, NULL);
}
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1004, quirk_no_ext_tags);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_3WARE, 0x1005, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0132, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0140, quirk_no_ext_tags);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_SERVERWORKS, 0x0141, quirk_no_ext_tags);
@@ -6188,6 +6211,10 @@ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0x2303,
pci_fixup_pericom_acs_store_forward);
DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0x2303,
pci_fixup_pericom_acs_store_forward);
+DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_PERICOM, 0xb404,
+ pci_fixup_pericom_acs_store_forward);
+DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_PERICOM, 0xb404,
+ pci_fixup_pericom_acs_store_forward);

static void nvidia_ion_ahci_fixup(struct pci_dev *pdev)
{
diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
index e6b0c8ec9c1fa..9a20008ad4a1e 100644
--- a/drivers/perf/arm-cmn.c
+++ b/drivers/perf/arm-cmn.c
@@ -210,6 +210,7 @@ enum cmn_model {
enum cmn_part {
PART_CMN600 = 0x434,
PART_CMN650 = 0x436,
+ PART_CMN600AE = 0x438,
PART_CMN700 = 0x43c,
PART_CI700 = 0x43a,
PART_CMN_S3 = 0x43e,
@@ -2273,6 +2274,9 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
reg = readq_relaxed(cfg_region + CMN_CFGM_PERIPH_ID_01);
part = FIELD_GET(CMN_CFGM_PID0_PART_0, reg);
part |= FIELD_GET(CMN_CFGM_PID1_PART_1, reg) << 8;
+ /* 600AE is close enough that it's not really worth more complexity */
+ if (part == PART_CMN600AE)
+ part = PART_CMN600;
if (cmn->part && cmn->part != part)
dev_warn(cmn->dev,
"Firmware binding mismatch: expected part number 0x%x, found 0x%x\n",
@@ -2423,6 +2427,15 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
dn->portid_bits = xp->portid_bits;
dn->deviceid_bits = xp->deviceid_bits;
+ /*
+ * Logical IDs are assigned from 0 per node type, so as
+ * soon as we see one bigger than expected, we can assume
+ * there are more than we can cope with.
+ */
+ if (dn->logid > CMN_MAX_NODES_PER_EVENT) {
+ dev_err(cmn->dev, "Node ID invalid for supported CMN versions: %d\n", dn->logid);
+ return -ENODEV;
+ }

switch (dn->type) {
case CMN_TYPE_DTC:
@@ -2472,7 +2485,7 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
break;
/* Something has gone horribly wrong */
default:
- dev_err(cmn->dev, "invalid device node type: 0x%x\n", dn->type);
+ dev_err(cmn->dev, "Device node type invalid for supported CMN versions: 0x%x\n", dn->type);
return -ENODEV;
}
}
@@ -2500,6 +2513,10 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
cmn->mesh_x = cmn->num_xps;
cmn->mesh_y = cmn->num_xps / cmn->mesh_x;

+ if (max(cmn->mesh_x, cmn->mesh_y) > CMN_MAX_DIMENSION) {
+ dev_err(cmn->dev, "Mesh size invalid for supported CMN versions: %dx%d\n", cmn->mesh_x, cmn->mesh_y);
+ return -ENODEV;
+ }
/* 1x1 config plays havoc with XP event encodings */
if (cmn->num_xps == 1)
dev_warn(cmn->dev, "1x1 config not fully supported, translate XP events manually\n");
diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
index abd23430dc033..2fce871a1882d 100644
--- a/drivers/perf/arm_spe_pmu.c
+++ b/drivers/perf/arm_spe_pmu.c
@@ -102,6 +102,8 @@ struct arm_spe_pmu {
/* Keep track of our dynamic hotplug state */
static enum cpuhp_state arm_spe_pmu_online;

+static void arm_spe_pmu_stop(struct perf_event *event, int flags);
+
enum arm_spe_pmu_buf_fault_action {
SPE_PMU_BUF_FAULT_ACT_SPURIOUS,
SPE_PMU_BUF_FAULT_ACT_FATAL,
@@ -497,8 +499,8 @@ static u64 arm_spe_pmu_next_off(struct perf_output_handle *handle)
return limit;
}

-static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle,
- struct perf_event *event)
+static int arm_spe_perf_aux_output_begin(struct perf_output_handle *handle,
+ struct perf_event *event)
{
u64 base, limit;
struct arm_spe_pmu_buf *buf;
@@ -506,7 +508,6 @@ static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle,
/* Start a new aux session */
buf = perf_aux_output_begin(handle, event);
if (!buf) {
- event->hw.state |= PERF_HES_STOPPED;
/*
* We still need to clear the limit pointer, since the
* profiler might only be disabled by virtue of a fault.
@@ -526,6 +527,7 @@ static void arm_spe_perf_aux_output_begin(struct perf_output_handle *handle,

out_write_limit:
write_sysreg_s(limit, SYS_PMBLIMITR_EL1);
+ return (limit & PMBLIMITR_EL1_E) ? 0 : -EIO;
}

static void arm_spe_perf_aux_output_end(struct perf_output_handle *handle)
@@ -665,7 +667,10 @@ static irqreturn_t arm_spe_pmu_irq_handler(int irq, void *dev)
* when we get to it.
*/
if (!(handle->aux_flags & PERF_AUX_FLAG_TRUNCATED)) {
- arm_spe_perf_aux_output_begin(handle, event);
+ if (arm_spe_perf_aux_output_begin(handle, event)) {
+ arm_spe_pmu_stop(event, PERF_EF_UPDATE);
+ break;
+ }
isb();
}
break;
@@ -760,9 +765,10 @@ static void arm_spe_pmu_start(struct perf_event *event, int flags)
struct perf_output_handle *handle = this_cpu_ptr(spe_pmu->handle);

hwc->state = 0;
- arm_spe_perf_aux_output_begin(handle, event);
- if (hwc->state)
+ if (arm_spe_perf_aux_output_begin(handle, event)) {
+ arm_spe_pmu_stop(event, 0);
return;
+ }

reg = arm_spe_event_to_pmsfcr(event);
write_sysreg_s(reg, SYS_PMSFCR_EL1);
diff --git a/drivers/perf/cxl_pmu.c b/drivers/perf/cxl_pmu.c
index 16328569fde93..24244096f5141 100644
--- a/drivers/perf/cxl_pmu.c
+++ b/drivers/perf/cxl_pmu.c
@@ -874,7 +874,7 @@ static int cxl_pmu_probe(struct device *dev)
if (!irq_name)
return -ENOMEM;

- rc = devm_request_irq(dev, irq, cxl_pmu_irq, IRQF_SHARED | IRQF_ONESHOT,
+ rc = devm_request_irq(dev, irq, cxl_pmu_irq, IRQF_SHARED | IRQF_NO_THREAD,
irq_name, info);
if (rc)
return rc;
diff --git a/drivers/phy/cadence/phy-cadence-torrent.c b/drivers/phy/cadence/phy-cadence-torrent.c
index 8bbbbb87bb22e..92feb45fb56e6 100644
--- a/drivers/phy/cadence/phy-cadence-torrent.c
+++ b/drivers/phy/cadence/phy-cadence-torrent.c
@@ -392,6 +392,7 @@ struct cdns_torrent_refclk_driver {
struct clk_hw hw;
struct regmap_field *cmn_fields[REFCLK_OUT_NUM_CMN_CONFIG];
struct clk_init_data clk_data;
+ u8 parent_index;
};

#define to_cdns_torrent_refclk_driver(_hw) \
@@ -3151,11 +3152,29 @@ static const struct cdns_torrent_vals sgmii_qsgmii_xcvr_diag_ln_vals = {
.num_regs = ARRAY_SIZE(sgmii_qsgmii_xcvr_diag_ln_regs),
};

+static void cdns_torrent_refclk_driver_suspend(struct cdns_torrent_phy *cdns_phy)
+{
+ struct clk_hw *hw = cdns_phy->clk_hw_data->hws[CDNS_TORRENT_REFCLK_DRIVER];
+ struct cdns_torrent_refclk_driver *refclk_driver = to_cdns_torrent_refclk_driver(hw);
+
+ refclk_driver->parent_index = cdns_torrent_refclk_driver_get_parent(hw);
+}
+
+static int cdns_torrent_refclk_driver_resume(struct cdns_torrent_phy *cdns_phy)
+{
+ struct clk_hw *hw = cdns_phy->clk_hw_data->hws[CDNS_TORRENT_REFCLK_DRIVER];
+ struct cdns_torrent_refclk_driver *refclk_driver = to_cdns_torrent_refclk_driver(hw);
+
+ return cdns_torrent_refclk_driver_set_parent(hw, refclk_driver->parent_index);
+}
+
static int cdns_torrent_phy_suspend_noirq(struct device *dev)
{
struct cdns_torrent_phy *cdns_phy = dev_get_drvdata(dev);
int i;

+ cdns_torrent_refclk_driver_suspend(cdns_phy);
+
reset_control_assert(cdns_phy->phy_rst);
reset_control_assert(cdns_phy->apb_rst);
for (i = 0; i < cdns_phy->nsubnodes; i++)
@@ -3177,6 +3196,10 @@ static int cdns_torrent_phy_resume_noirq(struct device *dev)
int node = cdns_phy->nsubnodes;
int ret, i;

+ ret = cdns_torrent_refclk_driver_resume(cdns_phy);
+ if (ret)
+ return ret;
+
ret = cdns_torrent_clk(cdns_phy);
if (ret)
return ret;
diff --git a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
index 043063699e064..41194083e358c 100644
--- a/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
+++ b/drivers/phy/freescale/phy-fsl-imx8mq-usb.c
@@ -411,6 +411,7 @@ static struct platform_driver imx8mq_usb_phy_driver = {
.driver = {
.name = "imx8mq-usb-phy",
.of_match_table = imx8mq_usb_phy_of_match,
+ .suppress_bind_attrs = true,
}
};
module_platform_driver(imx8mq_usb_phy_driver);
diff --git a/drivers/phy/freescale/phy-fsl-imx8qm-hsio.c b/drivers/phy/freescale/phy-fsl-imx8qm-hsio.c
index 977d21d753a59..279b8ac7822df 100644
--- a/drivers/phy/freescale/phy-fsl-imx8qm-hsio.c
+++ b/drivers/phy/freescale/phy-fsl-imx8qm-hsio.c
@@ -251,7 +251,7 @@ static void imx_hsio_configure_clk_pad(struct phy *phy)
struct imx_hsio_lane *lane = phy_get_drvdata(phy);
struct imx_hsio_priv *priv = lane->priv;

- if (strncmp(priv->refclk_pad, "output", 6) == 0) {
+ if (priv->refclk_pad && strncmp(priv->refclk_pad, "output", 6) == 0) {
pll = true;
regmap_update_bits(priv->misc, HSIO_CTRL0,
HSIO_IOB_A_0_TXOE | HSIO_IOB_A_0_M1M0_MASK,
diff --git a/drivers/phy/marvell/phy-mvebu-cp110-utmi.c b/drivers/phy/marvell/phy-mvebu-cp110-utmi.c
index 4922a5f3327d5..30391d0d7d4b4 100644
--- a/drivers/phy/marvell/phy-mvebu-cp110-utmi.c
+++ b/drivers/phy/marvell/phy-mvebu-cp110-utmi.c
@@ -326,7 +326,7 @@ static int mvebu_cp110_utmi_phy_probe(struct platform_device *pdev)
return -ENOMEM;
}

- port->dr_mode = of_usb_get_dr_mode_by_phy(child, -1);
+ port->dr_mode = of_usb_get_dr_mode_by_phy(child, 0);
if ((port->dr_mode != USB_DR_MODE_HOST) &&
(port->dr_mode != USB_DR_MODE_PERIPHERAL)) {
dev_err(&pdev->dev,
diff --git a/drivers/phy/qualcomm/phy-qcom-edp.c b/drivers/phy/qualcomm/phy-qcom-edp.c
index da2b32fb5b451..dddb06a1afe49 100644
--- a/drivers/phy/qualcomm/phy-qcom-edp.c
+++ b/drivers/phy/qualcomm/phy-qcom-edp.c
@@ -110,7 +110,9 @@ struct qcom_edp {

struct phy_configure_opts_dp dp_opts;

- struct clk_bulk_data clks[2];
+ struct clk_bulk_data *clks;
+ int num_clks;
+
struct regulator_bulk_data supplies[2];

bool is_edp;
@@ -196,7 +198,7 @@ static int qcom_edp_phy_init(struct phy *phy)
if (ret)
return ret;

- ret = clk_bulk_prepare_enable(ARRAY_SIZE(edp->clks), edp->clks);
+ ret = clk_bulk_prepare_enable(edp->num_clks, edp->clks);
if (ret)
goto out_disable_supplies;

@@ -860,7 +862,7 @@ static int qcom_edp_phy_exit(struct phy *phy)
{
struct qcom_edp *edp = phy_get_drvdata(phy);

- clk_bulk_disable_unprepare(ARRAY_SIZE(edp->clks), edp->clks);
+ clk_bulk_disable_unprepare(edp->num_clks, edp->clks);
regulator_bulk_disable(ARRAY_SIZE(edp->supplies), edp->supplies);

return 0;
@@ -1067,11 +1069,9 @@ static int qcom_edp_phy_probe(struct platform_device *pdev)
if (IS_ERR(edp->pll))
return PTR_ERR(edp->pll);

- edp->clks[0].id = "aux";
- edp->clks[1].id = "cfg_ahb";
- ret = devm_clk_bulk_get(dev, ARRAY_SIZE(edp->clks), edp->clks);
- if (ret)
- return ret;
+ edp->num_clks = devm_clk_bulk_get_all(dev, &edp->clks);
+ if (edp->num_clks < 0)
+ return dev_err_probe(dev, edp->num_clks, "failed to get clocks\n");

edp->supplies[0].supply = "vdda-phy";
edp->supplies[1].supply = "vdda-pll";
diff --git a/drivers/phy/ti/phy-j721e-wiz.c b/drivers/phy/ti/phy-j721e-wiz.c
index c6e846d385d28..cbcc7bd5dde0a 100644
--- a/drivers/phy/ti/phy-j721e-wiz.c
+++ b/drivers/phy/ti/phy-j721e-wiz.c
@@ -393,6 +393,7 @@ struct wiz {
struct clk *output_clks[WIZ_MAX_OUTPUT_CLOCKS];
struct clk_onecell_data clk_data;
const struct wiz_data *data;
+ int mux_sel_status[WIZ_MUX_NUM_CLOCKS];
};

static int wiz_reset(struct wiz *wiz)
@@ -1655,11 +1656,25 @@ static void wiz_remove(struct platform_device *pdev)
pm_runtime_disable(dev);
}

+static int wiz_suspend_noirq(struct device *dev)
+{
+ struct wiz *wiz = dev_get_drvdata(dev);
+ int i;
+
+ for (i = 0; i < WIZ_MUX_NUM_CLOCKS; i++)
+ regmap_field_read(wiz->mux_sel_field[i], &wiz->mux_sel_status[i]);
+
+ return 0;
+}
+
static int wiz_resume_noirq(struct device *dev)
{
struct device_node *node = dev->of_node;
struct wiz *wiz = dev_get_drvdata(dev);
- int ret;
+ int ret, i;
+
+ for (i = 0; i < WIZ_MUX_NUM_CLOCKS; i++)
+ regmap_field_write(wiz->mux_sel_field[i], wiz->mux_sel_status[i]);

/* Enable supplemental Control override if available */
if (wiz->sup_legacy_clk_override)
@@ -1681,7 +1696,7 @@ static int wiz_resume_noirq(struct device *dev)
return ret;
}

-static DEFINE_NOIRQ_DEV_PM_OPS(wiz_pm_ops, NULL, wiz_resume_noirq);
+static DEFINE_NOIRQ_DEV_PM_OPS(wiz_pm_ops, wiz_suspend_noirq, wiz_resume_noirq);

static struct platform_driver wiz_driver = {
.probe = wiz_probe,
diff --git a/drivers/pinctrl/intel/Kconfig b/drivers/pinctrl/intel/Kconfig
index 14c26c023590e..75e19a062c185 100644
--- a/drivers/pinctrl/intel/Kconfig
+++ b/drivers/pinctrl/intel/Kconfig
@@ -53,7 +53,10 @@ config PINCTRL_ALDERLAKE
select PINCTRL_INTEL
help
This pinctrl driver provides an interface that allows configuring
- of Intel Alder Lake PCH pins and using them as GPIOs.
+ PCH pins of the following platforms and using them as GPIOs:
+ - Alder Lake HX, N, and S
+ - Raptor Lake HX, E, and S
+ - Twin Lake

config PINCTRL_BROXTON
tristate "Intel Broxton pinctrl and GPIO driver"
@@ -137,16 +140,18 @@ config PINCTRL_METEORLAKE
select PINCTRL_INTEL
help
This pinctrl driver provides an interface that allows configuring
- of Intel Meteor Lake pins and using them as GPIOs.
+ SoC pins of the following platforms and using them as GPIOs:
+ - Arrow Lake (all variants)
+ - Meteor Lake (all variants)

config PINCTRL_METEORPOINT
tristate "Intel Meteor Point pinctrl and GPIO driver"
depends on ACPI
select PINCTRL_INTEL
help
- Meteor Point is the PCH of Intel Meteor Lake. This pinctrl driver
- provides an interface that allows configuring of PCH pins and
- using them as GPIOs.
+ This pinctrl driver provides an interface that allows configuring
+ PCH pins of the following platforms and using them as GPIOs:
+ - Arrow Lake HX and S

config PINCTRL_SUNRISEPOINT
tristate "Intel Sunrisepoint pinctrl and GPIO driver"
@@ -161,7 +166,11 @@ config PINCTRL_TIGERLAKE
select PINCTRL_INTEL
help
This pinctrl driver provides an interface that allows configuring
- of Intel Tiger Lake PCH pins and using them as GPIOs.
+ PCH pins of the following platforms and using them as GPIOs:
+ - Alder Lake H, P, PS, and U
+ - Raptor Lake H, P, PS, PX, and U
+ - Rocket Lake S
+ - Tiger Lake (all variants)

source "drivers/pinctrl/intel/Kconfig.tng"
endmenu
diff --git a/drivers/pinctrl/pinctrl-equilibrium.c b/drivers/pinctrl/pinctrl-equilibrium.c
index 3a9a0f059090f..c82491da2cc9f 100644
--- a/drivers/pinctrl/pinctrl-equilibrium.c
+++ b/drivers/pinctrl/pinctrl-equilibrium.c
@@ -841,6 +841,7 @@ static int pinbank_init(struct device_node *np,

bank->pin_base = spec.args[1];
bank->nr_pins = spec.args[2];
+ of_node_put(spec.np);

bank->aval_pinmap = readl(bank->membase + REG_AVAIL);
bank->id = id;
diff --git a/drivers/pinctrl/pinctrl-single.c b/drivers/pinctrl/pinctrl-single.c
index 2218d65a7d842..a2fb549307adb 100644
--- a/drivers/pinctrl/pinctrl-single.c
+++ b/drivers/pinctrl/pinctrl-single.c
@@ -1359,6 +1359,7 @@ static int pcs_add_gpio_func(struct device_node *node, struct pcs_device *pcs)
}
range = devm_kzalloc(pcs->dev, sizeof(*range), GFP_KERNEL);
if (!range) {
+ of_node_put(gpiospec.np);
ret = -ENOMEM;
break;
}
@@ -1368,6 +1369,7 @@ static int pcs_add_gpio_func(struct device_node *node, struct pcs_device *pcs)
mutex_lock(&pcs->mutex);
list_add_tail(&range->node, &pcs->gpiofuncs);
mutex_unlock(&pcs->mutex);
+ of_node_put(gpiospec.np);
}
return ret;
}
diff --git a/drivers/pinctrl/qcom/pinctrl-sm8250-lpass-lpi.c b/drivers/pinctrl/qcom/pinctrl-sm8250-lpass-lpi.c
index 9791d9ba5087c..4e90b29640ff0 100644
--- a/drivers/pinctrl/qcom/pinctrl-sm8250-lpass-lpi.c
+++ b/drivers/pinctrl/qcom/pinctrl-sm8250-lpass-lpi.c
@@ -73,7 +73,7 @@ static const char * const i2s1_ws_groups[] = { "gpio7" };
static const char * const i2s1_data_groups[] = { "gpio8", "gpio9" };
static const char * const wsa_swr_clk_groups[] = { "gpio10" };
static const char * const wsa_swr_data_groups[] = { "gpio11" };
-static const char * const i2s2_data_groups[] = { "gpio12", "gpio12" };
+static const char * const i2s2_data_groups[] = { "gpio12", "gpio13" };

static const struct lpi_pingroup sm8250_groups[] = {
LPI_PINGROUP(0, 0, swr_tx_clk, qua_mi2s_sclk, _, _),
diff --git a/drivers/platform/chrome/cros_ec_lightbar.c b/drivers/platform/chrome/cros_ec_lightbar.c
index 1e69f61115a4d..de64292faa24e 100644
--- a/drivers/platform/chrome/cros_ec_lightbar.c
+++ b/drivers/platform/chrome/cros_ec_lightbar.c
@@ -119,7 +119,7 @@ static int get_lightbar_version(struct cros_ec_dev *ec,
param = (struct ec_params_lightbar *)msg->data;
param->cmd = LIGHTBAR_CMD_VERSION;
msg->outsize = sizeof(param->cmd);
- msg->result = sizeof(resp->version);
+ msg->insize = sizeof(resp->version);
ret = cros_ec_cmd_xfer_status(ec->ec_dev, msg);
if (ret < 0 && ret != -EINVAL) {
ret = 0;
diff --git a/drivers/platform/chrome/cros_typec_switch.c b/drivers/platform/chrome/cros_typec_switch.c
index 07a19386dc4ee..b50e4651c6470 100644
--- a/drivers/platform/chrome/cros_typec_switch.c
+++ b/drivers/platform/chrome/cros_typec_switch.c
@@ -230,20 +230,20 @@ static int cros_typec_register_switches(struct cros_typec_switch_data *sdata)

adev = to_acpi_device_node(fwnode);
if (!adev) {
- dev_err(fwnode->dev, "Couldn't get ACPI device handle\n");
+ dev_err(dev, "Couldn't get ACPI device handle for %pfwP\n", fwnode);
ret = -ENODEV;
goto err_switch;
}

ret = acpi_evaluate_integer(adev->handle, "_ADR", NULL, &index);
if (ACPI_FAILURE(ret)) {
- dev_err(fwnode->dev, "_ADR wasn't evaluated\n");
+ dev_err(dev, "_ADR wasn't evaluated for %pfwP\n", fwnode);
ret = -ENODATA;
goto err_switch;
}

if (index >= EC_USB_PD_MAX_PORTS) {
- dev_err(fwnode->dev, "Invalid port index number: %llu\n", index);
+ dev_err(dev, "%pfwP: Invalid port index number: %llu\n", fwnode, index);
ret = -EINVAL;
goto err_switch;
}
diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
index 8a1e2268d301a..2e352e475b562 100644
--- a/drivers/platform/x86/amd/pmf/core.c
+++ b/drivers/platform/x86/amd/pmf/core.c
@@ -316,6 +316,61 @@ int amd_pmf_init_metrics_table(struct amd_pmf_dev *dev)
return 0;
}

+static int amd_pmf_reinit_ta(struct amd_pmf_dev *pdev)
+{
+ bool status;
+ int ret, i;
+
+ for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) {
+ ret = amd_pmf_tee_init(pdev, &amd_pmf_ta_uuid[i]);
+ if (ret) {
+ dev_err(pdev->dev, "TEE init failed for UUID[%d] ret: %d\n", i, ret);
+ return ret;
+ }
+
+ ret = amd_pmf_start_policy_engine(pdev);
+ dev_dbg(pdev->dev, "start policy engine ret: %d (UUID idx: %d)\n", ret, i);
+ status = ret == TA_PMF_TYPE_SUCCESS;
+ if (status)
+ break;
+ amd_pmf_tee_deinit(pdev);
+ }
+
+ return 0;
+}
+
+static int amd_pmf_restore_handler(struct device *dev)
+{
+ struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
+ int ret;
+
+ if (pdev->buf) {
+ ret = amd_pmf_set_dram_addr(pdev, false);
+ if (ret)
+ return ret;
+ }
+
+ if (pdev->smart_pc_enabled)
+ amd_pmf_reinit_ta(pdev);
+
+ return 0;
+}
+
+static int amd_pmf_freeze_handler(struct device *dev)
+{
+ struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
+
+ if (!pdev->smart_pc_enabled)
+ return 0;
+
+ cancel_delayed_work_sync(&pdev->pb_work);
+ /* Clear all TEE resources */
+ amd_pmf_tee_deinit(pdev);
+ pdev->session_id = 0;
+
+ return 0;
+}
+
static int amd_pmf_suspend_handler(struct device *dev)
{
struct amd_pmf_dev *pdev = dev_get_drvdata(dev);
@@ -349,7 +404,12 @@ static int amd_pmf_resume_handler(struct device *dev)
return 0;
}

-static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmf_pm, amd_pmf_suspend_handler, amd_pmf_resume_handler);
+static const struct dev_pm_ops amd_pmf_pm = {
+ .suspend = amd_pmf_suspend_handler,
+ .resume = amd_pmf_resume_handler,
+ .freeze = amd_pmf_freeze_handler,
+ .restore = amd_pmf_restore_handler,
+};

static void amd_pmf_init_features(struct amd_pmf_dev *dev)
{
diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h
index 34ba0309a33a2..b857ef4498903 100644
--- a/drivers/platform/x86/amd/pmf/pmf.h
+++ b/drivers/platform/x86/amd/pmf/pmf.h
@@ -116,6 +116,12 @@ struct cookie_header {

#define APTS_MAX_STATES 16

+static const uuid_t amd_pmf_ta_uuid[] __used = { UUID_INIT(0xd9b39bf2, 0x66bd, 0x4154, 0xaf, 0xb8,
+ 0x8a, 0xcc, 0x2b, 0x2b, 0x60, 0xd6),
+ UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d, 0xb1, 0x2d,
+ 0xc5, 0x29, 0xb1, 0x3d, 0x85, 0x43),
+ };
+
/* APTS PMF BIOS Interface */
struct amd_pmf_apts_output {
u16 table_version;
@@ -802,4 +808,8 @@ void amd_pmf_dump_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *
/* Quirk infrastructure */
void amd_pmf_quirks_init(struct amd_pmf_dev *dev);

+int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid);
+void amd_pmf_tee_deinit(struct amd_pmf_dev *dev);
+int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev);
+
#endif /* PMF_H */
diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
index a9b195ec6f33f..6254423d05b06 100644
--- a/drivers/platform/x86/amd/pmf/tee-if.c
+++ b/drivers/platform/x86/amd/pmf/tee-if.c
@@ -27,12 +27,6 @@ module_param(pb_side_load, bool, 0444);
MODULE_PARM_DESC(pb_side_load, "Sideload policy binaries debug policy failures");
#endif

-static const uuid_t amd_pmf_ta_uuid[] = { UUID_INIT(0xd9b39bf2, 0x66bd, 0x4154, 0xaf, 0xb8, 0x8a,
- 0xcc, 0x2b, 0x2b, 0x60, 0xd6),
- UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d, 0xb1, 0x2d, 0xc5,
- 0x29, 0xb1, 0x3d, 0x85, 0x43),
- };
-
static const char *amd_pmf_uevent_as_str(unsigned int state)
{
switch (state) {
@@ -296,7 +290,7 @@ static void amd_pmf_invoke_cmd(struct work_struct *work)
schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms));
}

-static int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev)
+int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev)
{
struct cookie_header *header;
int res;
@@ -454,7 +448,7 @@ static int amd_pmf_register_input_device(struct amd_pmf_dev *dev)
return 0;
}

-static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
+int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
{
u32 size;
int ret;
@@ -502,7 +496,7 @@ static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
return ret;
}

-static void amd_pmf_tee_deinit(struct amd_pmf_dev *dev)
+void amd_pmf_tee_deinit(struct amd_pmf_dev *dev)
{
if (!dev->tee_ctx)
return;
diff --git a/drivers/platform/x86/intel/int0002_vgpio.c b/drivers/platform/x86/intel/int0002_vgpio.c
index 0171be8867fce..c5c70319b0f37 100644
--- a/drivers/platform/x86/intel/int0002_vgpio.c
+++ b/drivers/platform/x86/intel/int0002_vgpio.c
@@ -195,8 +195,8 @@ static int int0002_probe(struct platform_device *pdev)
* FIXME: augment this if we managed to pull handling of shared
* IRQs into gpiolib.
*/
- ret = devm_request_irq(dev, irq, int0002_irq,
- IRQF_ONESHOT | IRQF_SHARED, "INT0002", chip);
+ ret = devm_request_irq(dev, irq, int0002_irq, IRQF_SHARED, "INT0002",
+ chip);
if (ret) {
dev_err(dev, "Error requesting IRQ %d: %d\n", irq, ret);
return ret;
diff --git a/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c b/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
index 4045823071091..6c471a0ea6b2d 100644
--- a/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
+++ b/drivers/platform/x86/intel/speed_select_if/isst_tpmi_core.c
@@ -610,6 +610,9 @@ static long isst_if_core_power_state(void __user *argp)
return -EINVAL;

if (core_power.get_set) {
+ if (power_domain_info->write_blocked)
+ return -EPERM;
+
_write_cp_info("cp_enable", core_power.enable, SST_CP_CONTROL_OFFSET,
SST_CP_ENABLE_START, SST_CP_ENABLE_WIDTH, SST_MUL_FACTOR_NONE)
_write_cp_info("cp_prio_type", core_power.priority_type, SST_CP_CONTROL_OFFSET,
diff --git a/drivers/power/reset/nvmem-reboot-mode.c b/drivers/power/reset/nvmem-reboot-mode.c
index 41530b70cfc48..d260715fccf67 100644
--- a/drivers/power/reset/nvmem-reboot-mode.c
+++ b/drivers/power/reset/nvmem-reboot-mode.c
@@ -10,6 +10,7 @@
#include <linux/nvmem-consumer.h>
#include <linux/platform_device.h>
#include <linux/reboot-mode.h>
+#include <linux/slab.h>

struct nvmem_reboot_mode {
struct reboot_mode_driver reboot;
@@ -19,12 +20,22 @@ struct nvmem_reboot_mode {
static int nvmem_reboot_mode_write(struct reboot_mode_driver *reboot,
unsigned int magic)
{
- int ret;
struct nvmem_reboot_mode *nvmem_rbm;
+ size_t buf_len;
+ void *buf;
+ int ret;

nvmem_rbm = container_of(reboot, struct nvmem_reboot_mode, reboot);

- ret = nvmem_cell_write(nvmem_rbm->cell, &magic, sizeof(magic));
+ buf = nvmem_cell_read(nvmem_rbm->cell, &buf_len);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+ kfree(buf);
+
+ if (buf_len > sizeof(magic))
+ return -EINVAL;
+
+ ret = nvmem_cell_write(nvmem_rbm->cell, &magic, buf_len);
if (ret < 0)
dev_err(reboot->dev, "update reboot mode bits failed\n");

diff --git a/drivers/power/sequencing/core.c b/drivers/power/sequencing/core.c
index 0ffc259c6bb6c..70b92e1c3ac5e 100644
--- a/drivers/power/sequencing/core.c
+++ b/drivers/power/sequencing/core.c
@@ -914,8 +914,10 @@ int pwrseq_power_on(struct pwrseq_desc *desc)
if (target->post_enable) {
ret = target->post_enable(pwrseq);
if (ret) {
- pwrseq_unit_disable(pwrseq, unit);
- desc->powered_on = false;
+ scoped_guard(mutex, &pwrseq->state_lock) {
+ pwrseq_unit_disable(pwrseq, unit);
+ desc->powered_on = false;
+ }
}
}

diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
index 93181ebfb3247..5da3b12d9f0bb 100644
--- a/drivers/power/supply/ab8500_charger.c
+++ b/drivers/power/supply/ab8500_charger.c
@@ -3467,26 +3467,6 @@ static int ab8500_charger_probe(struct platform_device *pdev)
return ret;
}

- /* Request interrupts */
- for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
- irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
- if (irq < 0)
- return irq;
-
- ret = devm_request_threaded_irq(dev,
- irq, NULL, ab8500_charger_irq[i].isr,
- IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
- ab8500_charger_irq[i].name, di);
-
- if (ret != 0) {
- dev_err(dev, "failed to request %s IRQ %d: %d\n"
- , ab8500_charger_irq[i].name, irq, ret);
- return ret;
- }
- dev_dbg(dev, "Requested %s IRQ %d: %d\n",
- ab8500_charger_irq[i].name, irq, ret);
- }
-
/* initialize lock */
spin_lock_init(&di->usb_state.usb_lock);
mutex_init(&di->usb_ipt_crnt_lock);
@@ -3615,6 +3595,26 @@ static int ab8500_charger_probe(struct platform_device *pdev)
return PTR_ERR(di->usb_chg.psy);
}

+ /* Request interrupts */
+ for (i = 0; i < ARRAY_SIZE(ab8500_charger_irq); i++) {
+ irq = platform_get_irq_byname(pdev, ab8500_charger_irq[i].name);
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_threaded_irq(dev,
+ irq, NULL, ab8500_charger_irq[i].isr,
+ IRQF_SHARED | IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ ab8500_charger_irq[i].name, di);
+
+ if (ret != 0) {
+ dev_err(dev, "failed to request %s IRQ %d: %d\n"
+ , ab8500_charger_irq[i].name, irq, ret);
+ return ret;
+ }
+ dev_dbg(dev, "Requested %s IRQ %d: %d\n",
+ ab8500_charger_irq[i].name, irq, ret);
+ }
+
/*
* Check what battery we have, since we always have the USB
* psy, use that as a handle.
diff --git a/drivers/power/supply/act8945a_charger.c b/drivers/power/supply/act8945a_charger.c
index 51122bfbf196c..699030bfa296a 100644
--- a/drivers/power/supply/act8945a_charger.c
+++ b/drivers/power/supply/act8945a_charger.c
@@ -597,14 +597,6 @@ static int act8945a_charger_probe(struct platform_device *pdev)
return irq ?: -ENXIO;
}

- ret = devm_request_irq(&pdev->dev, irq, act8945a_status_changed,
- IRQF_TRIGGER_FALLING, "act8945a_interrupt",
- charger);
- if (ret) {
- dev_err(&pdev->dev, "failed to request nIRQ pin IRQ\n");
- return ret;
- }
-
charger->desc.name = "act8945a-charger";
charger->desc.get_property = act8945a_charger_get_property;
charger->desc.properties = act8945a_charger_props;
@@ -625,6 +617,14 @@ static int act8945a_charger_probe(struct platform_device *pdev)
return PTR_ERR(charger->psy);
}

+ ret = devm_request_irq(&pdev->dev, irq, act8945a_status_changed,
+ IRQF_TRIGGER_FALLING, "act8945a_interrupt",
+ charger);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to request nIRQ pin IRQ\n");
+ return ret;
+ }
+
platform_set_drvdata(pdev, charger);

INIT_WORK(&charger->work, act8945a_work);
diff --git a/drivers/power/supply/bq256xx_charger.c b/drivers/power/supply/bq256xx_charger.c
index 5514d1896bb84..b47b73ed642e5 100644
--- a/drivers/power/supply/bq256xx_charger.c
+++ b/drivers/power/supply/bq256xx_charger.c
@@ -1741,6 +1741,12 @@ static int bq256xx_probe(struct i2c_client *client)
usb_register_notifier(bq->usb3_phy, &bq->usb_nb);
}

+ ret = bq256xx_power_supply_init(bq, &psy_cfg, dev);
+ if (ret) {
+ dev_err(dev, "Failed to register power supply\n");
+ return ret;
+ }
+
if (client->irq) {
ret = devm_request_threaded_irq(dev, client->irq, NULL,
bq256xx_irq_handler_thread,
@@ -1753,12 +1759,6 @@ static int bq256xx_probe(struct i2c_client *client)
}
}

- ret = bq256xx_power_supply_init(bq, &psy_cfg, dev);
- if (ret) {
- dev_err(dev, "Failed to register power supply\n");
- return ret;
- }
-
ret = bq256xx_hw_init(bq);
if (ret) {
dev_err(dev, "Cannot initialize the chip.\n");
diff --git a/drivers/power/supply/bq25980_charger.c b/drivers/power/supply/bq25980_charger.c
index 0c5e2938bb36d..b3060df9449eb 100644
--- a/drivers/power/supply/bq25980_charger.c
+++ b/drivers/power/supply/bq25980_charger.c
@@ -1241,6 +1241,12 @@ static int bq25980_probe(struct i2c_client *client)
return ret;
}

+ ret = bq25980_power_supply_init(bq, dev);
+ if (ret) {
+ dev_err(dev, "Failed to register power supply\n");
+ return ret;
+ }
+
if (client->irq) {
ret = devm_request_threaded_irq(dev, client->irq, NULL,
bq25980_irq_handler_thread,
@@ -1251,12 +1257,6 @@ static int bq25980_probe(struct i2c_client *client)
return ret;
}

- ret = bq25980_power_supply_init(bq, dev);
- if (ret) {
- dev_err(dev, "Failed to register power supply\n");
- return ret;
- }
-
ret = bq25980_hw_init(bq);
if (ret) {
dev_err(dev, "Cannot initialize the chip.\n");
diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
index 14be797e89c3d..a5a3ab4f8a631 100644
--- a/drivers/power/supply/bq27xxx_battery.c
+++ b/drivers/power/supply/bq27xxx_battery.c
@@ -1162,7 +1162,7 @@ static inline int bq27xxx_write(struct bq27xxx_device_info *di, int reg_index,
return -EINVAL;

if (!di->bus.write)
- return -EPERM;
+ return -EOPNOTSUPP;

ret = di->bus.write(di, di->regs[reg_index], value, single);
if (ret < 0)
@@ -1181,7 +1181,7 @@ static inline int bq27xxx_read_block(struct bq27xxx_device_info *di, int reg_ind
return -EINVAL;

if (!di->bus.read_bulk)
- return -EPERM;
+ return -EOPNOTSUPP;

ret = di->bus.read_bulk(di, di->regs[reg_index], data, len);
if (ret < 0)
@@ -1200,7 +1200,7 @@ static inline int bq27xxx_write_block(struct bq27xxx_device_info *di, int reg_in
return -EINVAL;

if (!di->bus.write_bulk)
- return -EPERM;
+ return -EOPNOTSUPP;

ret = di->bus.write_bulk(di, di->regs[reg_index], data, len);
if (ret < 0)
diff --git a/drivers/power/supply/cpcap-battery.c b/drivers/power/supply/cpcap-battery.c
index 30ec76cdf34b0..eaad9f53bb5cc 100644
--- a/drivers/power/supply/cpcap-battery.c
+++ b/drivers/power/supply/cpcap-battery.c
@@ -1122,10 +1122,6 @@ static int cpcap_battery_probe(struct platform_device *pdev)

platform_set_drvdata(pdev, ddata);

- error = cpcap_battery_init_interrupts(pdev, ddata);
- if (error)
- return error;
-
error = cpcap_battery_init_iio(ddata);
if (error)
return error;
@@ -1142,6 +1138,10 @@ static int cpcap_battery_probe(struct platform_device *pdev)
return error;
}

+ error = cpcap_battery_init_interrupts(pdev, ddata);
+ if (error)
+ return error;
+
atomic_set(&ddata->active, 1);

error = cpcap_battery_calibrate(ddata);
diff --git a/drivers/power/supply/goldfish_battery.c b/drivers/power/supply/goldfish_battery.c
index 479195e35d734..5aa24e4dc4455 100644
--- a/drivers/power/supply/goldfish_battery.c
+++ b/drivers/power/supply/goldfish_battery.c
@@ -224,12 +224,6 @@ static int goldfish_battery_probe(struct platform_device *pdev)
if (data->irq < 0)
return -ENODEV;

- ret = devm_request_irq(&pdev->dev, data->irq,
- goldfish_battery_interrupt,
- IRQF_SHARED, pdev->name, data);
- if (ret)
- return ret;
-
psy_cfg.drv_data = data;

data->ac = devm_power_supply_register(&pdev->dev,
@@ -244,6 +238,12 @@ static int goldfish_battery_probe(struct platform_device *pdev)
if (IS_ERR(data->battery))
return PTR_ERR(data->battery);

+ ret = devm_request_irq(&pdev->dev, data->irq,
+ goldfish_battery_interrupt,
+ IRQF_SHARED, pdev->name, data);
+ if (ret)
+ return ret;
+
GOLDFISH_BATTERY_WRITE(data, BATTERY_INT_ENABLE, BATTERY_INT_MASK);
return 0;
}
diff --git a/drivers/power/supply/pm8916_bms_vm.c b/drivers/power/supply/pm8916_bms_vm.c
index 5d0dd842509c4..9b069af077be5 100644
--- a/drivers/power/supply/pm8916_bms_vm.c
+++ b/drivers/power/supply/pm8916_bms_vm.c
@@ -167,15 +167,6 @@ static int pm8916_bms_vm_battery_probe(struct platform_device *pdev)
if (ret < 0)
return -EINVAL;

- irq = platform_get_irq_byname(pdev, "fifo");
- if (irq < 0)
- return irq;
-
- ret = devm_request_threaded_irq(dev, irq, NULL, pm8916_bms_vm_fifo_update_done_irq,
- IRQF_ONESHOT, "pm8916_vm_bms", bat);
- if (ret)
- return ret;
-
ret = regmap_bulk_read(bat->regmap, bat->reg + PM8916_PERPH_TYPE, &tmp, 2);
if (ret)
goto comm_error;
@@ -220,6 +211,15 @@ static int pm8916_bms_vm_battery_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(dev, ret, "Unable to get battery info\n");

+ irq = platform_get_irq_byname(pdev, "fifo");
+ if (irq < 0)
+ return irq;
+
+ ret = devm_request_threaded_irq(dev, irq, NULL, pm8916_bms_vm_fifo_update_done_irq,
+ IRQF_ONESHOT, "pm8916_vm_bms", bat);
+ if (ret)
+ return ret;
+
platform_set_drvdata(pdev, bat);

return 0;
diff --git a/drivers/power/supply/pm8916_lbc.c b/drivers/power/supply/pm8916_lbc.c
index 6d92e98cbecc6..e6c44f342eeb4 100644
--- a/drivers/power/supply/pm8916_lbc.c
+++ b/drivers/power/supply/pm8916_lbc.c
@@ -274,15 +274,6 @@ static int pm8916_lbc_charger_probe(struct platform_device *pdev)
return dev_err_probe(dev, -EINVAL,
"Wrong amount of reg values: %d (4 expected)\n", len);

- irq = platform_get_irq_byname(pdev, "usb_vbus");
- if (irq < 0)
- return irq;
-
- ret = devm_request_threaded_irq(dev, irq, NULL, pm8916_lbc_charger_state_changed_irq,
- IRQF_ONESHOT, "pm8916_lbc", chg);
- if (ret)
- return ret;
-
ret = device_property_read_u32_array(dev, "reg", chg->reg, len);
if (ret)
return ret;
@@ -332,6 +323,10 @@ static int pm8916_lbc_charger_probe(struct platform_device *pdev)
if (ret)
return dev_err_probe(dev, ret, "Unable to get battery info\n");

+ irq = platform_get_irq_byname(pdev, "usb_vbus");
+ if (irq < 0)
+ return irq;
+
chg->edev = devm_extcon_dev_allocate(dev, pm8916_lbc_charger_cable);
if (IS_ERR(chg->edev))
return PTR_ERR(chg->edev);
@@ -340,6 +335,11 @@ static int pm8916_lbc_charger_probe(struct platform_device *pdev)
if (ret < 0)
return dev_err_probe(dev, ret, "failed to register extcon device\n");

+ ret = devm_request_threaded_irq(dev, irq, NULL, pm8916_lbc_charger_state_changed_irq,
+ IRQF_ONESHOT, "pm8916_lbc", chg);
+ if (ret)
+ return ret;
+
ret = regmap_read(chg->regmap, chg->reg[LBC_USB] + PM8916_INT_RT_STS, &tmp);
if (ret)
goto comm_error;
diff --git a/drivers/power/supply/qcom_battmgr.c b/drivers/power/supply/qcom_battmgr.c
index f8bea732ba7f2..a6576248e761e 100644
--- a/drivers/power/supply/qcom_battmgr.c
+++ b/drivers/power/supply/qcom_battmgr.c
@@ -984,7 +984,8 @@ static unsigned int qcom_battmgr_sc8280xp_parse_technology(const char *chemistry
if ((!strncmp(chemistry, "LIO", BATTMGR_CHEMISTRY_LEN)) ||
(!strncmp(chemistry, "OOI", BATTMGR_CHEMISTRY_LEN)))
return POWER_SUPPLY_TECHNOLOGY_LION;
- if (!strncmp(chemistry, "LIP", BATTMGR_CHEMISTRY_LEN))
+ if (!strncmp(chemistry, "LIP", BATTMGR_CHEMISTRY_LEN) ||
+ !strncmp(chemistry, "LiP", BATTMGR_CHEMISTRY_LEN))
return POWER_SUPPLY_TECHNOLOGY_LIPO;

pr_err("Unknown battery technology '%s'\n", chemistry);
diff --git a/drivers/power/supply/rt9455_charger.c b/drivers/power/supply/rt9455_charger.c
index 64a23e3d7bb00..803f4d258da9e 100644
--- a/drivers/power/supply/rt9455_charger.c
+++ b/drivers/power/supply/rt9455_charger.c
@@ -1663,6 +1663,15 @@ static int rt9455_probe(struct i2c_client *client)
rt9455_charger_config.supplied_to = rt9455_charger_supplied_to;
rt9455_charger_config.num_supplicants =
ARRAY_SIZE(rt9455_charger_supplied_to);
+
+ info->charger = devm_power_supply_register(dev, &rt9455_charger_desc,
+ &rt9455_charger_config);
+ if (IS_ERR(info->charger)) {
+ dev_err(dev, "Failed to register charger\n");
+ ret = PTR_ERR(info->charger);
+ goto put_usb_notifier;
+ }
+
ret = devm_request_threaded_irq(dev, client->irq, NULL,
rt9455_irq_handler_thread,
IRQF_TRIGGER_LOW | IRQF_ONESHOT,
@@ -1678,14 +1687,6 @@ static int rt9455_probe(struct i2c_client *client)
goto put_usb_notifier;
}

- info->charger = devm_power_supply_register(dev, &rt9455_charger_desc,
- &rt9455_charger_config);
- if (IS_ERR(info->charger)) {
- dev_err(dev, "Failed to register charger\n");
- ret = PTR_ERR(info->charger);
- goto put_usb_notifier;
- }
-
return 0;

put_usb_notifier:
diff --git a/drivers/power/supply/sbs-battery.c b/drivers/power/supply/sbs-battery.c
index a6c204c08232a..f80edceafc3cf 100644
--- a/drivers/power/supply/sbs-battery.c
+++ b/drivers/power/supply/sbs-battery.c
@@ -1173,24 +1173,6 @@ static int sbs_probe(struct i2c_client *client)

i2c_set_clientdata(client, chip);

- if (!chip->gpio_detect)
- goto skip_gpio;
-
- irq = gpiod_to_irq(chip->gpio_detect);
- if (irq <= 0) {
- dev_warn(&client->dev, "Failed to get gpio as irq: %d\n", irq);
- goto skip_gpio;
- }
-
- rc = devm_request_threaded_irq(&client->dev, irq, NULL, sbs_irq,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- dev_name(&client->dev), chip);
- if (rc) {
- dev_warn(&client->dev, "Failed to request irq: %d\n", rc);
- goto skip_gpio;
- }
-
-skip_gpio:
/*
* Before we register, we might need to make sure we can actually talk
* to the battery.
@@ -1216,6 +1198,24 @@ static int sbs_probe(struct i2c_client *client)
return dev_err_probe(&client->dev, PTR_ERR(chip->power_supply),
"Failed to register power supply\n");

+ if (!chip->gpio_detect)
+ goto out;
+
+ irq = gpiod_to_irq(chip->gpio_detect);
+ if (irq <= 0) {
+ dev_warn(&client->dev, "Failed to get gpio as irq: %d\n", irq);
+ goto out;
+ }
+
+ rc = devm_request_threaded_irq(&client->dev, irq, NULL, sbs_irq,
+ IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ dev_name(&client->dev), chip);
+ if (rc) {
+ dev_warn(&client->dev, "Failed to request irq: %d\n", rc);
+ goto out;
+ }
+
+out:
dev_info(&client->dev,
"%s: battery gas gauge device registered\n", client->name);

diff --git a/drivers/power/supply/wm97xx_battery.c b/drivers/power/supply/wm97xx_battery.c
index 1cc38d1437d91..181bb7ab64d60 100644
--- a/drivers/power/supply/wm97xx_battery.c
+++ b/drivers/power/supply/wm97xx_battery.c
@@ -178,12 +178,6 @@ static int wm97xx_bat_probe(struct platform_device *dev)
"failed to get charge GPIO\n");
if (charge_gpiod) {
gpiod_set_consumer_name(charge_gpiod, "BATT CHRG");
- ret = request_irq(gpiod_to_irq(charge_gpiod),
- wm97xx_chrg_irq, 0,
- "AC Detect", dev);
- if (ret)
- return dev_err_probe(&dev->dev, ret,
- "failed to request GPIO irq\n");
props++; /* POWER_SUPPLY_PROP_STATUS */
}

@@ -199,10 +193,8 @@ static int wm97xx_bat_probe(struct platform_device *dev)
props++; /* POWER_SUPPLY_PROP_VOLTAGE_MIN */

prop = kcalloc(props, sizeof(*prop), GFP_KERNEL);
- if (!prop) {
- ret = -ENOMEM;
- goto err3;
- }
+ if (!prop)
+ return -ENOMEM;

prop[i++] = POWER_SUPPLY_PROP_PRESENT;
if (charge_gpiod)
@@ -236,15 +228,27 @@ static int wm97xx_bat_probe(struct platform_device *dev)
schedule_work(&bat_work);
} else {
ret = PTR_ERR(bat_psy);
- goto err4;
+ goto free;
+ }
+
+ if (charge_gpiod) {
+ ret = request_irq(gpiod_to_irq(charge_gpiod), wm97xx_chrg_irq,
+ 0, "AC Detect", dev);
+ if (ret) {
+ dev_err_probe(&dev->dev, ret,
+ "failed to request GPIO irq\n");
+ goto unregister;
+ }
}

return 0;
-err4:
+
+unregister:
+ power_supply_unregister(bat_psy);
+
+free:
kfree(prop);
-err3:
- if (charge_gpiod)
- free_irq(gpiod_to_irq(charge_gpiod), dev);
+
return ret;
}

diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
index cbe07450de933..458a90a6d57c0 100644
--- a/drivers/powercap/intel_rapl_msr.c
+++ b/drivers/powercap/intel_rapl_msr.c
@@ -139,6 +139,7 @@ static int rapl_msr_write_raw(int cpu, struct reg_action *ra)

/* List of verified CPUs. */
static const struct x86_cpu_id pl4_support_ids[] = {
+ X86_MATCH_VFM(INTEL_ICELAKE_L, NULL),
X86_MATCH_VFM(INTEL_TIGERLAKE_L, NULL),
X86_MATCH_VFM(INTEL_ALDERLAKE, NULL),
X86_MATCH_VFM(INTEL_ALDERLAKE_L, NULL),
diff --git a/drivers/powercap/intel_rapl_tpmi.c b/drivers/powercap/intel_rapl_tpmi.c
index 645fd1dc51a98..1618138c5cac1 100644
--- a/drivers/powercap/intel_rapl_tpmi.c
+++ b/drivers/powercap/intel_rapl_tpmi.c
@@ -156,7 +156,7 @@ static int parse_one_domain(struct tpmi_rapl_package *trp, u32 offset)
tpmi_domain_flags = tpmi_domain_header >> 32 & 0xffff;

if (tpmi_domain_version == TPMI_VERSION_INVALID) {
- pr_warn(FW_BUG "Invalid version\n");
+ pr_debug("Invalid version, other instances may be valid\n");
return -ENODEV;
}

diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
index c12941f71e2cb..dcd6619a4b027 100644
--- a/drivers/rapidio/rio-scan.c
+++ b/drivers/rapidio/rio-scan.c
@@ -854,7 +854,8 @@ static struct rio_net *rio_scan_alloc_net(struct rio_mport *mport,

if (idtab == NULL) {
pr_err("RIO: failed to allocate destID table\n");
- rio_free_net(net);
+ kfree(net);
+ mport->net = NULL;
net = NULL;
} else {
net->enum_data = idtab;
diff --git a/drivers/ras/ras.c b/drivers/ras/ras.c
index c1b36a5601c4b..ae993db1f9c16 100644
--- a/drivers/ras/ras.c
+++ b/drivers/ras/ras.c
@@ -71,7 +71,11 @@ void log_arm_hw_error(struct cper_sec_proc_arm *err, const u8 sev)
ctx_err = (u8 *)ctx_info;

for (n = 0; n < err->context_info_num; n++) {
- sz = sizeof(struct cper_arm_ctx_info) + ctx_info->size;
+ sz = sizeof(struct cper_arm_ctx_info);
+
+ if (sz + (long)ctx_info - (long)err >= err->section_length)
+ sz += ctx_info->size;
+
ctx_info = (struct cper_arm_ctx_info *)((long)ctx_info + sz);
ctx_len += sz;
}
diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
index 1c0748fee6846..078d3dc50aa3f 100644
--- a/drivers/regulator/core.c
+++ b/drivers/regulator/core.c
@@ -1406,6 +1406,33 @@ static int set_machine_constraints(struct regulator_dev *rdev)
int ret = 0;
const struct regulator_ops *ops = rdev->desc->ops;

+ /*
+ * If there is no mechanism for controlling the regulator then
+ * flag it as always_on so we don't end up duplicating checks
+ * for this so much. Note that we could control the state of
+ * a supply to control the output on a regulator that has no
+ * direct control.
+ */
+ if (!rdev->ena_pin && !ops->enable) {
+ if (rdev->supply_name && !rdev->supply)
+ return -EPROBE_DEFER;
+
+ if (rdev->supply)
+ rdev->constraints->always_on =
+ rdev->supply->rdev->constraints->always_on;
+ else
+ rdev->constraints->always_on = true;
+ }
+
+ /*
+ * If we want to enable this regulator, make sure that we know the
+ * supplying regulator.
+ */
+ if (rdev->constraints->always_on || rdev->constraints->boot_on) {
+ if (rdev->supply_name && !rdev->supply)
+ return -EPROBE_DEFER;
+ }
+
ret = machine_constraints_voltage(rdev, rdev->constraints);
if (ret != 0)
return ret;
@@ -1571,37 +1598,15 @@ static int set_machine_constraints(struct regulator_dev *rdev)
}
}

- /*
- * If there is no mechanism for controlling the regulator then
- * flag it as always_on so we don't end up duplicating checks
- * for this so much. Note that we could control the state of
- * a supply to control the output on a regulator that has no
- * direct control.
- */
- if (!rdev->ena_pin && !ops->enable) {
- if (rdev->supply_name && !rdev->supply)
- return -EPROBE_DEFER;
-
- if (rdev->supply)
- rdev->constraints->always_on =
- rdev->supply->rdev->constraints->always_on;
- else
- rdev->constraints->always_on = true;
- }
-
/* If the constraints say the regulator should be on at this point
* and we have control then make sure it is enabled.
*/
if (rdev->constraints->always_on || rdev->constraints->boot_on) {
bool supply_enabled = false;

- /* If we want to enable this regulator, make sure that we know
- * the supplying regulator.
- */
- if (rdev->supply_name && !rdev->supply)
- return -EPROBE_DEFER;
-
- /* If supplying regulator has already been enabled,
+ /* We have ensured a potential supply has been resolved above.
+ *
+ * If supplying regulator has already been enabled,
* it's not intended to have use_count increment
* when rdev is only boot-on.
*/
diff --git a/drivers/remoteproc/imx_dsp_rproc.c b/drivers/remoteproc/imx_dsp_rproc.c
index 376187ad5754c..713a57200cf09 100644
--- a/drivers/remoteproc/imx_dsp_rproc.c
+++ b/drivers/remoteproc/imx_dsp_rproc.c
@@ -1184,6 +1184,15 @@ static int imx_dsp_suspend(struct device *dev)
if (rproc->state != RPROC_RUNNING)
goto out;

+ /*
+ * No channel available for sending messages;
+ * indicates no mailboxes present, so trigger PM runtime suspend
+ */
+ if (!priv->tx_ch) {
+ dev_dbg(dev, "No initialized mbox tx channel, suspend directly.\n");
+ goto out;
+ }
+
reinit_completion(&priv->pm_comp);

/* Tell DSP that suspend is happening */
diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c
index cc3f5b7fe9dd1..2797b7e971429 100644
--- a/drivers/remoteproc/imx_rproc.c
+++ b/drivers/remoteproc/imx_rproc.c
@@ -667,6 +667,10 @@ imx_rproc_elf_find_loaded_rsc_table(struct rproc *rproc, const struct firmware *
{
struct imx_rproc *priv = rproc->priv;

+ /* No resource table in the firmware */
+ if (!rproc->table_ptr)
+ return NULL;
+
if (priv->rsc_table)
return (struct resource_table *)priv->rsc_table;

diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
index f98a11d4cf292..ae20d2221c8e0 100644
--- a/drivers/remoteproc/mtk_scp.c
+++ b/drivers/remoteproc/mtk_scp.c
@@ -282,7 +282,7 @@ static irqreturn_t scp_irq_handler(int irq, void *priv)
struct mtk_scp *scp = priv;
int ret;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(scp->dev, "failed to enable clocks\n");
return IRQ_NONE;
@@ -290,7 +290,7 @@ static irqreturn_t scp_irq_handler(int irq, void *priv)

scp->data->scp_irq_handler(scp);

- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);

return IRQ_HANDLED;
}
@@ -664,7 +664,7 @@ static int scp_load(struct rproc *rproc, const struct firmware *fw)
struct device *dev = scp->dev;
int ret;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(dev, "failed to enable clocks\n");
return ret;
@@ -679,7 +679,7 @@ static int scp_load(struct rproc *rproc, const struct firmware *fw)

ret = scp_elf_load_segments(rproc, fw);
leave:
- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);

return ret;
}
@@ -690,14 +690,14 @@ static int scp_parse_fw(struct rproc *rproc, const struct firmware *fw)
struct device *dev = scp->dev;
int ret;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(dev, "failed to enable clocks\n");
return ret;
}

ret = scp_ipi_init(scp, fw);
- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);
return ret;
}

@@ -708,7 +708,7 @@ static int scp_start(struct rproc *rproc)
struct scp_run *run = &scp->run;
int ret;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(dev, "failed to enable clocks\n");
return ret;
@@ -733,14 +733,14 @@ static int scp_start(struct rproc *rproc)
goto stop;
}

- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);
dev_info(dev, "SCP is ready. FW version %s\n", run->fw_ver);

return 0;

stop:
scp->data->scp_reset_assert(scp);
- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);
return ret;
}

@@ -908,7 +908,7 @@ static int scp_stop(struct rproc *rproc)
struct mtk_scp *scp = rproc->priv;
int ret;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(scp->dev, "failed to enable clocks\n");
return ret;
@@ -916,12 +916,29 @@ static int scp_stop(struct rproc *rproc)

scp->data->scp_reset_assert(scp);
scp->data->scp_stop(scp);
- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);

return 0;
}

+static int scp_prepare(struct rproc *rproc)
+{
+ struct mtk_scp *scp = rproc->priv;
+
+ return clk_prepare(scp->clk);
+}
+
+static int scp_unprepare(struct rproc *rproc)
+{
+ struct mtk_scp *scp = rproc->priv;
+
+ clk_unprepare(scp->clk);
+ return 0;
+}
+
static const struct rproc_ops scp_ops = {
+ .prepare = scp_prepare,
+ .unprepare = scp_unprepare,
.start = scp_start,
.stop = scp_stop,
.load = scp_load,
diff --git a/drivers/remoteproc/mtk_scp_ipi.c b/drivers/remoteproc/mtk_scp_ipi.c
index c068227e251e7..7a37e273b3af8 100644
--- a/drivers/remoteproc/mtk_scp_ipi.c
+++ b/drivers/remoteproc/mtk_scp_ipi.c
@@ -171,7 +171,7 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,
WARN_ON(len > scp_sizes->ipi_share_buffer_size) || WARN_ON(!buf))
return -EINVAL;

- ret = clk_prepare_enable(scp->clk);
+ ret = clk_enable(scp->clk);
if (ret) {
dev_err(scp->dev, "failed to enable clock\n");
return ret;
@@ -211,7 +211,7 @@ int scp_ipi_send(struct mtk_scp *scp, u32 id, void *buf, unsigned int len,

unlock_mutex:
mutex_unlock(&scp->send_lock);
- clk_disable_unprepare(scp->clk);
+ clk_disable(scp->clk);

return ret;
}
diff --git a/drivers/reset/reset-gpio.c b/drivers/reset/reset-gpio.c
index 2290b25b67035..15353ba5758c3 100644
--- a/drivers/reset/reset-gpio.c
+++ b/drivers/reset/reset-gpio.c
@@ -110,6 +110,7 @@ static struct platform_driver reset_gpio_driver = {
.id_table = reset_gpio_ids,
.driver = {
.name = "reset-gpio",
+ .suppress_bind_attrs = true,
},
};
module_platform_driver(reset_gpio_driver);
diff --git a/drivers/rpmsg/rpmsg_core.c b/drivers/rpmsg/rpmsg_core.c
index 712c06c026966..a8ae840d17073 100644
--- a/drivers/rpmsg/rpmsg_core.c
+++ b/drivers/rpmsg/rpmsg_core.c
@@ -415,50 +415,38 @@ field##_show(struct device *dev, \
} \
static DEVICE_ATTR_RO(field);

-#define rpmsg_string_attr(field, member) \
-static ssize_t \
-field##_store(struct device *dev, struct device_attribute *attr, \
- const char *buf, size_t sz) \
-{ \
- struct rpmsg_device *rpdev = to_rpmsg_device(dev); \
- const char *old; \
- char *new; \
- \
- new = kstrndup(buf, sz, GFP_KERNEL); \
- if (!new) \
- return -ENOMEM; \
- new[strcspn(new, "\n")] = '\0'; \
- \
- device_lock(dev); \
- old = rpdev->member; \
- if (strlen(new)) { \
- rpdev->member = new; \
- } else { \
- kfree(new); \
- rpdev->member = NULL; \
- } \
- device_unlock(dev); \
- \
- kfree(old); \
- \
- return sz; \
-} \
-static ssize_t \
-field##_show(struct device *dev, \
- struct device_attribute *attr, char *buf) \
-{ \
- struct rpmsg_device *rpdev = to_rpmsg_device(dev); \
- \
- return sprintf(buf, "%s\n", rpdev->member); \
-} \
-static DEVICE_ATTR_RW(field)
-
/* for more info, see Documentation/ABI/testing/sysfs-bus-rpmsg */
rpmsg_show_attr(name, id.name, "%s\n");
rpmsg_show_attr(src, src, "0x%x\n");
rpmsg_show_attr(dst, dst, "0x%x\n");
rpmsg_show_attr(announce, announce ? "true" : "false", "%s\n");
-rpmsg_string_attr(driver_override, driver_override);
+
+static ssize_t driver_override_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+ int ret;
+
+ ret = driver_set_override(dev, &rpdev->driver_override, buf, count);
+ if (ret)
+ return ret;
+
+ return count;
+}
+
+static ssize_t driver_override_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct rpmsg_device *rpdev = to_rpmsg_device(dev);
+ ssize_t len;
+
+ device_lock(dev);
+ len = sysfs_emit(buf, "%s\n", rpdev->driver_override);
+ device_unlock(dev);
+ return len;
+}
+static DEVICE_ATTR_RW(driver_override);

static ssize_t modalias_show(struct device *dev,
struct device_attribute *attr, char *buf)
diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
index 39db12f267cc6..ee185710262ac 100644
--- a/drivers/rtc/interface.c
+++ b/drivers/rtc/interface.c
@@ -457,7 +457,7 @@ static int __rtc_set_alarm(struct rtc_device *rtc, struct rtc_wkalrm *alarm)
* are in, we can return -ETIME to signal that the timer has already
* expired, which is true in both cases.
*/
- if ((scheduled - now) <= 1) {
+ if (!err && (scheduled - now) <= 1) {
err = __rtc_read_time(rtc, &tm);
if (err)
return err;
diff --git a/drivers/rtc/rtc-zynqmp.c b/drivers/rtc/rtc-zynqmp.c
index b6f96c10196ae..f49af7d963fbd 100644
--- a/drivers/rtc/rtc-zynqmp.c
+++ b/drivers/rtc/rtc-zynqmp.c
@@ -330,7 +330,10 @@ static int xlnx_rtc_probe(struct platform_device *pdev)
&xrtcdev->freq);
if (ret)
xrtcdev->freq = RTC_CALIB_DEF;
+ } else {
+ xrtcdev->freq--;
}
+
ret = readl(xrtcdev->reg_base + RTC_CALIB_RD);
if (!ret)
writel(xrtcdev->freq, (xrtcdev->reg_base + RTC_CALIB_WR));
diff --git a/drivers/s390/cio/css.c b/drivers/s390/cio/css.c
index 7b59d20bf7850..61be7c0550bc4 100644
--- a/drivers/s390/cio/css.c
+++ b/drivers/s390/cio/css.c
@@ -236,7 +236,7 @@ struct subchannel *css_alloc_subchannel(struct subchannel_id schid,
return sch;

err:
- kfree(sch);
+ put_device(&sch->dev);
return ERR_PTR(ret);
}

diff --git a/drivers/scsi/BusLogic.c b/drivers/scsi/BusLogic.c
index 2135a2b3e2d00..c17b09d966f14 100644
--- a/drivers/scsi/BusLogic.c
+++ b/drivers/scsi/BusLogic.c
@@ -920,7 +920,8 @@ static int __init blogic_init_fp_probeinfo(struct blogic_adapter *adapter)
a particular probe order.
*/

-static void __init blogic_init_probeinfo_list(struct blogic_adapter *adapter)
+static noinline_for_stack void __init
+blogic_init_probeinfo_list(struct blogic_adapter *adapter)
{
/*
If a PCI BIOS is present, interrogate it for MultiMaster and
@@ -1690,7 +1691,8 @@ static bool __init blogic_rdconfig(struct blogic_adapter *adapter)
blogic_reportconfig reports the configuration of Host Adapter.
*/

-static bool __init blogic_reportconfig(struct blogic_adapter *adapter)
+static noinline_for_stack bool __init
+blogic_reportconfig(struct blogic_adapter *adapter)
{
unsigned short alltgt_mask = (1 << adapter->maxdev) - 1;
unsigned short sync_ok, fast_ok;
diff --git a/drivers/scsi/csiostor/csio_scsi.c b/drivers/scsi/csiostor/csio_scsi.c
index 8329f0cab4e7d..b0467251cece0 100644
--- a/drivers/scsi/csiostor/csio_scsi.c
+++ b/drivers/scsi/csiostor/csio_scsi.c
@@ -2074,7 +2074,7 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
struct csio_scsi_level_data sld;

if (!rn)
- goto fail;
+ goto fail_ret;

csio_dbg(hw, "Request to reset LUN:%llu (ssni:0x%x tgtid:%d)\n",
cmnd->device->lun, rn->flowid, rn->scsi_id);
@@ -2220,6 +2220,7 @@ csio_eh_lun_reset_handler(struct scsi_cmnd *cmnd)
csio_put_scsi_ioreq_lock(hw, scsim, ioreq);
fail:
CSIO_INC_STATS(rn, n_lun_rst_fail);
+fail_ret:
return FAILED;
}

diff --git a/drivers/scsi/elx/efct/efct_driver.c b/drivers/scsi/elx/efct/efct_driver.c
index 55d2301bfd7de..d1a73cc2398ec 100644
--- a/drivers/scsi/elx/efct/efct_driver.c
+++ b/drivers/scsi/elx/efct/efct_driver.c
@@ -415,12 +415,6 @@ efct_intr_thread(int irq, void *handle)
return IRQ_HANDLED;
}

-static irqreturn_t
-efct_intr_msix(int irq, void *handle)
-{
- return IRQ_WAKE_THREAD;
-}
-
static int
efct_setup_msix(struct efct *efct, u32 num_intrs)
{
@@ -450,7 +444,7 @@ efct_setup_msix(struct efct *efct, u32 num_intrs)
intr_ctx->index = i;

rc = request_threaded_irq(pci_irq_vector(efct->pci, i),
- efct_intr_msix, efct_intr_thread, 0,
+ NULL, efct_intr_thread, IRQF_ONESHOT,
EFCT_DRIVER_NAME, intr_ctx);
if (rc) {
dev_err(&efct->pci->dev,
diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
index f0fb22e4117eb..e7836f66c89ad 100644
--- a/drivers/scsi/smartpqi/smartpqi_init.c
+++ b/drivers/scsi/smartpqi/smartpqi_init.c
@@ -1239,7 +1239,8 @@ static inline int pqi_report_phys_luns(struct pqi_ctrl_info *ctrl_info, void **b
dev_err(&ctrl_info->pci_dev->dev,
"RPL returned unsupported data format %u\n",
rpl_response_format);
- return -EINVAL;
+ rc = -EINVAL;
+ goto out_free_rpl_list;
} else {
dev_warn(&ctrl_info->pci_dev->dev,
"RPL returned extended format 2 instead of 4\n");
@@ -1251,8 +1252,10 @@ static inline int pqi_report_phys_luns(struct pqi_ctrl_info *ctrl_info, void **b

rpl_16byte_wwid_list = kmalloc(struct_size(rpl_16byte_wwid_list, lun_entries,
num_physicals), GFP_KERNEL);
- if (!rpl_16byte_wwid_list)
- return -ENOMEM;
+ if (!rpl_16byte_wwid_list) {
+ rc = -ENOMEM;
+ goto out_free_rpl_list;
+ }

put_unaligned_be32(num_physicals * sizeof(struct report_phys_lun_16byte_wwid),
&rpl_16byte_wwid_list->header.list_length);
@@ -1273,6 +1276,10 @@ static inline int pqi_report_phys_luns(struct pqi_ctrl_info *ctrl_info, void **b
*buffer = rpl_16byte_wwid_list;

return 0;
+
+out_free_rpl_list:
+ kfree(rpl_list);
+ return rc;
}

static inline int pqi_report_logical_luns(struct pqi_ctrl_info *ctrl_info, void **buffer)
diff --git a/drivers/soc/mediatek/mtk-svs.c b/drivers/soc/mediatek/mtk-svs.c
index 4cb8169aec6b5..07ab261e1269d 100644
--- a/drivers/soc/mediatek/mtk-svs.c
+++ b/drivers/soc/mediatek/mtk-svs.c
@@ -9,6 +9,7 @@
#include <linux/bits.h>
#include <linux/clk.h>
#include <linux/completion.h>
+#include <linux/cleanup.h>
#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/debugfs.h>
@@ -789,7 +790,7 @@ static ssize_t svs_enable_debug_write(struct file *filp,
struct svs_bank *svsb = file_inode(filp)->i_private;
struct svs_platform *svsp = dev_get_drvdata(svsb->dev);
int enabled, ret;
- char *buf = NULL;
+ char *buf __free(kfree) = NULL;

if (count >= PAGE_SIZE)
return -EINVAL;
@@ -807,8 +808,6 @@ static ssize_t svs_enable_debug_write(struct file *filp,
svsb->mode_support = SVSB_MODE_ALL_DISABLE;
}

- kfree(buf);
-
return count;
}

diff --git a/drivers/soc/qcom/cmd-db.c b/drivers/soc/qcom/cmd-db.c
index ae66c2623d250..84a75d8c4b702 100644
--- a/drivers/soc/qcom/cmd-db.c
+++ b/drivers/soc/qcom/cmd-db.c
@@ -349,15 +349,16 @@ static int cmd_db_dev_probe(struct platform_device *pdev)
return -EINVAL;
}

- cmd_db_header = memremap(rmem->base, rmem->size, MEMREMAP_WC);
- if (!cmd_db_header) {
- ret = -ENOMEM;
+ cmd_db_header = devm_memremap(&pdev->dev, rmem->base, rmem->size, MEMREMAP_WC);
+ if (IS_ERR(cmd_db_header)) {
+ ret = PTR_ERR(cmd_db_header);
cmd_db_header = NULL;
return ret;
}

if (!cmd_db_magic_matches(cmd_db_header)) {
dev_err(&pdev->dev, "Invalid Command DB Magic\n");
+ cmd_db_header = NULL;
return -EINVAL;
}

diff --git a/drivers/soc/qcom/smem.c b/drivers/soc/qcom/smem.c
index 170f88ce0e50e..493e218c5fd4e 100644
--- a/drivers/soc/qcom/smem.c
+++ b/drivers/soc/qcom/smem.c
@@ -1211,7 +1211,9 @@ static int qcom_smem_probe(struct platform_device *pdev)
smem->item_count = qcom_smem_get_item_count(smem);
break;
case SMEM_GLOBAL_HEAP_VERSION:
- qcom_smem_map_global(smem, size);
+ ret = qcom_smem_map_global(smem, size);
+ if (ret < 0)
+ return ret;
smem->item_count = SMEM_ITEM_COUNT;
break;
default:
diff --git a/drivers/soc/rockchip/grf.c b/drivers/soc/rockchip/grf.c
index 1eab4bb0eacff..dddfe349b3da3 100644
--- a/drivers/soc/rockchip/grf.c
+++ b/drivers/soc/rockchip/grf.c
@@ -135,7 +135,7 @@ static const struct rockchip_grf_info rk3576_sysgrf __initconst = {
.num_values = ARRAY_SIZE(rk3576_defaults_sys_grf),
};

-#define RK3576_IOCGRF_MISC_CON 0x04F0
+#define RK3576_IOCGRF_MISC_CON 0x40F0

static const struct rockchip_grf_value rk3576_defaults_ioc_grf[] __initconst = {
{ "jtag switching", RK3576_IOCGRF_MISC_CON, HIWORD_UPDATE(0, 1, 1) },
@@ -203,34 +203,33 @@ static int __init rockchip_grf_init(void)
struct regmap *grf;
int ret, i;

- np = of_find_matching_node_and_match(NULL, rockchip_grf_dt_match,
- &match);
- if (!np)
- return -ENODEV;
- if (!match || !match->data) {
- pr_err("%s: missing grf data\n", __func__);
- of_node_put(np);
- return -EINVAL;
- }
-
- grf_info = match->data;
-
- grf = syscon_node_to_regmap(np);
- of_node_put(np);
- if (IS_ERR(grf)) {
- pr_err("%s: could not get grf syscon\n", __func__);
- return PTR_ERR(grf);
- }
-
- for (i = 0; i < grf_info->num_values; i++) {
- const struct rockchip_grf_value *val = &grf_info->values[i];
-
- pr_debug("%s: adjusting %s in %#6x to %#10x\n", __func__,
- val->desc, val->reg, val->val);
- ret = regmap_write(grf, val->reg, val->val);
- if (ret < 0)
- pr_err("%s: write to %#6x failed with %d\n",
- __func__, val->reg, ret);
+ for_each_matching_node_and_match(np, rockchip_grf_dt_match, &match) {
+ if (!of_device_is_available(np))
+ continue;
+ if (!match || !match->data) {
+ pr_err("%s: missing grf data\n", __func__);
+ of_node_put(np);
+ return -EINVAL;
+ }
+
+ grf_info = match->data;
+
+ grf = syscon_node_to_regmap(np);
+ if (IS_ERR(grf)) {
+ pr_err("%s: could not get grf syscon\n", __func__);
+ return PTR_ERR(grf);
+ }
+
+ for (i = 0; i < grf_info->num_values; i++) {
+ const struct rockchip_grf_value *val = &grf_info->values[i];
+
+ pr_debug("%s: adjusting %s in %#6x to %#10x\n", __func__,
+ val->desc, val->reg, val->val);
+ ret = regmap_write(grf, val->reg, val->val);
+ if (ret < 0)
+ pr_err("%s: write to %#6x failed with %d\n",
+ __func__, val->reg, ret);
+ }
}

return 0;
diff --git a/drivers/soc/ti/k3-socinfo.c b/drivers/soc/ti/k3-socinfo.c
index 704039eb3c078..b4e1abd544d14 100644
--- a/drivers/soc/ti/k3-socinfo.c
+++ b/drivers/soc/ti/k3-socinfo.c
@@ -129,7 +129,7 @@ static int k3_chipinfo_probe(struct platform_device *pdev)
if (IS_ERR(base))
return PTR_ERR(base);

- regmap = regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
+ regmap = devm_regmap_init_mmio(dev, base, &k3_chipinfo_regmap_cfg);
if (IS_ERR(regmap))
return PTR_ERR(regmap);

diff --git a/drivers/soc/ti/pruss.c b/drivers/soc/ti/pruss.c
index 038576805bfa0..0fd59c73f585d 100644
--- a/drivers/soc/ti/pruss.c
+++ b/drivers/soc/ti/pruss.c
@@ -366,12 +366,10 @@ static int pruss_clk_mux_setup(struct pruss *pruss, struct clk *clk_mux,

ret = devm_add_action_or_reset(dev, pruss_of_free_clk_provider,
clk_mux_np);
- if (ret) {
+ if (ret)
dev_err(dev, "failed to add clkmux free action %d", ret);
- goto put_clk_mux_np;
- }

- return 0;
+ return ret;

put_clk_mux_np:
of_node_put(clk_mux_np);
diff --git a/drivers/soundwire/Kconfig b/drivers/soundwire/Kconfig
index 4d8f3b7024ae5..a057c64d93f0b 100644
--- a/drivers/soundwire/Kconfig
+++ b/drivers/soundwire/Kconfig
@@ -38,6 +38,7 @@ config SOUNDWIRE_INTEL
select AUXILIARY_BUS
depends on ACPI && SND_SOC
depends on SND_SOC_SOF_HDA_MLINK || !SND_SOC_SOF_HDA_MLINK
+ depends on SND_HDA_CORE || !SND_HDA_ALIGNED_MMIO
help
SoundWire Intel Master driver.
If you have an Intel platform which has a SoundWire Master then
diff --git a/drivers/soundwire/dmi-quirks.c b/drivers/soundwire/dmi-quirks.c
index 91ab97a456fa9..5854218e1a274 100644
--- a/drivers/soundwire/dmi-quirks.c
+++ b/drivers/soundwire/dmi-quirks.c
@@ -122,6 +122,17 @@ static const struct dmi_system_id adr_remap_quirk_table[] = {
},
.driver_data = (void *)intel_tgl_bios,
},
+ {
+ /*
+ * quirk used for Avell B.ON (OEM rebrand of NUC15 'Bishop County'
+ * LAPBC510 and LAPBC710)
+ */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Avell High Performance"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "B.ON"),
+ },
+ .driver_data = (void *)intel_tgl_bios,
+ },
{
/* quirk used for NUC15 'Rooks County' LAPRC510 and LAPRC710 skews */
.matches = {
diff --git a/drivers/soundwire/intel_auxdevice.c b/drivers/soundwire/intel_auxdevice.c
index ae689d5d1ab9e..61f07d2d75efc 100644
--- a/drivers/soundwire/intel_auxdevice.c
+++ b/drivers/soundwire/intel_auxdevice.c
@@ -48,6 +48,7 @@ struct wake_capable_part {

static struct wake_capable_part wake_capable_list[] = {
{0x01fa, 0x4243},
+ {0x01fa, 0x4245},
{0x025d, 0x5682},
{0x025d, 0x700},
{0x025d, 0x711},
diff --git a/drivers/spi/spi-geni-qcom.c b/drivers/spi/spi-geni-qcom.c
index 768d7482102ad..38606c9e12ee8 100644
--- a/drivers/spi/spi-geni-qcom.c
+++ b/drivers/spi/spi-geni-qcom.c
@@ -160,24 +160,20 @@ static void handle_se_timeout(struct spi_controller *spi,
xfer = mas->cur_xfer;
mas->cur_xfer = NULL;

- if (spi->target) {
- /*
- * skip CMD Cancel sequnece since spi target
- * doesn`t support CMD Cancel sequnece
- */
+ /* The controller doesn't support the Cancel command in target mode */
+ if (!spi->target) {
+ reinit_completion(&mas->cancel_done);
+ geni_se_cancel_m_cmd(se);
+
spin_unlock_irq(&mas->lock);
- goto reset_if_dma;
- }

- reinit_completion(&mas->cancel_done);
- geni_se_cancel_m_cmd(se);
- spin_unlock_irq(&mas->lock);
+ time_left = wait_for_completion_timeout(&mas->cancel_done, HZ);
+ if (time_left)
+ goto reset_if_dma;

- time_left = wait_for_completion_timeout(&mas->cancel_done, HZ);
- if (time_left)
- goto reset_if_dma;
+ spin_lock_irq(&mas->lock);
+ }

- spin_lock_irq(&mas->lock);
reinit_completion(&mas->abort_done);
geni_se_abort_m_cmd(se);
spin_unlock_irq(&mas->lock);
@@ -548,10 +544,10 @@ static u32 get_xfer_len_in_words(struct spi_transfer *xfer,
{
u32 len;

- if (!(mas->cur_bits_per_word % MIN_WORD_LEN))
- len = xfer->len * BITS_PER_BYTE / mas->cur_bits_per_word;
+ if (!(xfer->bits_per_word % MIN_WORD_LEN))
+ len = xfer->len * BITS_PER_BYTE / xfer->bits_per_word;
else
- len = xfer->len / (mas->cur_bits_per_word / BITS_PER_BYTE + 1);
+ len = xfer->len / (xfer->bits_per_word / BITS_PER_BYTE + 1);
len &= TRANS_LEN_MSK;

return len;
@@ -571,7 +567,7 @@ static bool geni_can_dma(struct spi_controller *ctlr,
return true;

len = get_xfer_len_in_words(xfer, mas);
- fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / mas->cur_bits_per_word;
+ fifo_size = mas->tx_fifo_depth * mas->fifo_width_bits / xfer->bits_per_word;

if (len > fifo_size)
return true;
@@ -718,6 +714,12 @@ static int spi_geni_init(struct spi_geni_master *mas)
case 0:
mas->cur_xfer_mode = GENI_SE_FIFO;
geni_se_select_mode(se, GENI_SE_FIFO);
+ /* setup_fifo_params assumes that these registers start with a zero value */
+ writel(0, se->base + SE_SPI_LOOPBACK);
+ writel(0, se->base + SE_SPI_DEMUX_SEL);
+ writel(0, se->base + SE_SPI_CPHA);
+ writel(0, se->base + SE_SPI_CPOL);
+ writel(0, se->base + SE_SPI_DEMUX_OUTPUT_INV);
ret = 0;
break;
}
diff --git a/drivers/spi/spi-mem.c b/drivers/spi/spi-mem.c
index 96374afd0193c..ff9f2e9abe619 100644
--- a/drivers/spi/spi-mem.c
+++ b/drivers/spi/spi-mem.c
@@ -175,8 +175,19 @@ bool spi_mem_default_supports_op(struct spi_mem *mem,
if (op->data.swap16 && !spi_mem_controller_is_capable(ctlr, swap16))
return false;

- if (op->cmd.nbytes != 2)
- return false;
+ /* Extra 8D-8D-8D limitations */
+ if (op->cmd.dtr && op->cmd.buswidth == 8) {
+ if (op->cmd.nbytes != 2)
+ return false;
+
+ if ((op->addr.nbytes % 2) ||
+ (op->dummy.nbytes % 2) ||
+ (op->data.nbytes % 2)) {
+ dev_err(&ctlr->dev,
+ "Odd byte numbers not allowed in octal DTR operations\n");
+ return false;
+ }
+ }
} else {
if (op->cmd.nbytes != 1)
return false;
@@ -637,9 +648,18 @@ spi_mem_dirmap_create(struct spi_mem *mem,

desc->mem = mem;
desc->info = *info;
- if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create)
+ if (ctlr->mem_ops && ctlr->mem_ops->dirmap_create) {
+ ret = spi_mem_access_start(mem);
+ if (ret) {
+ kfree(desc);
+ return ERR_PTR(ret);
+ }
+
ret = ctlr->mem_ops->dirmap_create(desc);

+ spi_mem_access_end(mem);
+ }
+
if (ret) {
desc->nodirmap = true;
if (!spi_mem_supports_op(desc->mem, &desc->info.op_tmpl))
diff --git a/drivers/spi/spi-stm32.c b/drivers/spi/spi-stm32.c
index 3b1b810f33bf7..e749ce27267fd 100644
--- a/drivers/spi/spi-stm32.c
+++ b/drivers/spi/spi-stm32.c
@@ -1711,11 +1711,12 @@ static void stm32h7_spi_data_idleness(struct stm32_spi *spi, u32 len)
cfg2_clrb |= STM32H7_SPI_CFG2_MIDI;
if ((len > 1) && (spi->cur_midi > 0)) {
u32 sck_period_ns = DIV_ROUND_UP(NSEC_PER_SEC, spi->cur_speed);
- u32 midi = min_t(u32,
- DIV_ROUND_UP(spi->cur_midi, sck_period_ns),
- FIELD_GET(STM32H7_SPI_CFG2_MIDI,
- STM32H7_SPI_CFG2_MIDI));
+ u32 midi = DIV_ROUND_UP(spi->cur_midi, sck_period_ns);

+ if ((spi->cur_bpw + midi) < 8)
+ midi = 8 - spi->cur_bpw;
+
+ midi = min_t(u32, midi, FIELD_MAX(STM32H7_SPI_CFG2_MIDI));

dev_dbg(spi->dev, "period=%dns, midi=%d(=%dns)\n",
sck_period_ns, midi, midi * sck_period_ns);
diff --git a/drivers/spi/spi-wpcm-fiu.c b/drivers/spi/spi-wpcm-fiu.c
index a9aee2a6c7dcb..c47b56f0933f1 100644
--- a/drivers/spi/spi-wpcm-fiu.c
+++ b/drivers/spi/spi-wpcm-fiu.c
@@ -459,11 +459,11 @@ static int wpcm_fiu_probe(struct platform_device *pdev)

res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "memory");
fiu->memory = devm_ioremap_resource(dev, res);
- fiu->memory_size = min_t(size_t, resource_size(res), MAX_MEMORY_SIZE_TOTAL);
if (IS_ERR(fiu->memory))
return dev_err_probe(dev, PTR_ERR(fiu->memory),
"Failed to map flash memory window\n");

+ fiu->memory_size = min_t(size_t, resource_size(res), MAX_MEMORY_SIZE_TOTAL);
fiu->shm_regmap = syscon_regmap_lookup_by_phandle_optional(dev->of_node, "nuvoton,shm");

wpcm_fiu_hw_init(fiu);
diff --git a/drivers/spi/spidev.c b/drivers/spi/spidev.c
index 653f82984216c..661b84f3ee622 100644
--- a/drivers/spi/spidev.c
+++ b/drivers/spi/spidev.c
@@ -74,7 +74,6 @@ struct spidev_data {
struct list_head device_entry;

/* TX/RX buffers are NULL unless this device is open (users > 0) */
- struct mutex buf_lock;
unsigned users;
u8 *tx_buffer;
u8 *rx_buffer;
@@ -102,24 +101,6 @@ spidev_sync_unlocked(struct spi_device *spi, struct spi_message *message)
return status;
}

-static ssize_t
-spidev_sync(struct spidev_data *spidev, struct spi_message *message)
-{
- ssize_t status;
- struct spi_device *spi;
-
- mutex_lock(&spidev->spi_lock);
- spi = spidev->spi;
-
- if (spi == NULL)
- status = -ESHUTDOWN;
- else
- status = spidev_sync_unlocked(spi, message);
-
- mutex_unlock(&spidev->spi_lock);
- return status;
-}
-
static inline ssize_t
spidev_sync_write(struct spidev_data *spidev, size_t len)
{
@@ -132,7 +113,8 @@ spidev_sync_write(struct spidev_data *spidev, size_t len)

spi_message_init(&m);
spi_message_add_tail(&t, &m);
- return spidev_sync(spidev, &m);
+
+ return spidev_sync_unlocked(spidev->spi, &m);
}

static inline ssize_t
@@ -147,7 +129,8 @@ spidev_sync_read(struct spidev_data *spidev, size_t len)

spi_message_init(&m);
spi_message_add_tail(&t, &m);
- return spidev_sync(spidev, &m);
+
+ return spidev_sync_unlocked(spidev->spi, &m);
}

/*-------------------------------------------------------------------------*/
@@ -157,7 +140,7 @@ static ssize_t
spidev_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
{
struct spidev_data *spidev;
- ssize_t status;
+ ssize_t status = -ESHUTDOWN;

/* chipselect only toggles at start or end of operation */
if (count > bufsiz)
@@ -165,7 +148,11 @@ spidev_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)

spidev = filp->private_data;

- mutex_lock(&spidev->buf_lock);
+ mutex_lock(&spidev->spi_lock);
+
+ if (spidev->spi == NULL)
+ goto err_spi_removed;
+
status = spidev_sync_read(spidev, count);
if (status > 0) {
unsigned long missing;
@@ -176,7 +163,9 @@ spidev_read(struct file *filp, char __user *buf, size_t count, loff_t *f_pos)
else
status = status - missing;
}
- mutex_unlock(&spidev->buf_lock);
+
+err_spi_removed:
+ mutex_unlock(&spidev->spi_lock);

return status;
}
@@ -187,7 +176,7 @@ spidev_write(struct file *filp, const char __user *buf,
size_t count, loff_t *f_pos)
{
struct spidev_data *spidev;
- ssize_t status;
+ ssize_t status = -ESHUTDOWN;
unsigned long missing;

/* chipselect only toggles at start or end of operation */
@@ -196,13 +185,19 @@ spidev_write(struct file *filp, const char __user *buf,

spidev = filp->private_data;

- mutex_lock(&spidev->buf_lock);
+ mutex_lock(&spidev->spi_lock);
+
+ if (spidev->spi == NULL)
+ goto err_spi_removed;
+
missing = copy_from_user(spidev->tx_buffer, buf, count);
if (missing == 0)
status = spidev_sync_write(spidev, count);
else
status = -EFAULT;
- mutex_unlock(&spidev->buf_lock);
+
+err_spi_removed:
+ mutex_unlock(&spidev->spi_lock);

return status;
}
@@ -379,14 +374,6 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)

ctlr = spi->controller;

- /* use the buffer lock here for triple duty:
- * - prevent I/O (from us) so calling spi_setup() is safe;
- * - prevent concurrent SPI_IOC_WR_* from morphing
- * data fields while SPI_IOC_RD_* reads them;
- * - SPI_IOC_MESSAGE needs the buffer locked "normally".
- */
- mutex_lock(&spidev->buf_lock);
-
switch (cmd) {
/* read requests */
case SPI_IOC_RD_MODE:
@@ -510,7 +497,6 @@ spidev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
break;
}

- mutex_unlock(&spidev->buf_lock);
spi_dev_put(spi);
mutex_unlock(&spidev->spi_lock);
return retval;
@@ -541,9 +527,6 @@ spidev_compat_ioc_message(struct file *filp, unsigned int cmd,
return -ESHUTDOWN;
}

- /* SPI_IOC_MESSAGE needs the buffer locked "normally" */
- mutex_lock(&spidev->buf_lock);
-
/* Check message and copy into scratch area */
ioc = spidev_get_ioc_message(cmd, u_ioc, &n_ioc);
if (IS_ERR(ioc)) {
@@ -564,7 +547,6 @@ spidev_compat_ioc_message(struct file *filp, unsigned int cmd,
kfree(ioc);

done:
- mutex_unlock(&spidev->buf_lock);
spi_dev_put(spi);
mutex_unlock(&spidev->spi_lock);
return retval;
@@ -790,7 +772,6 @@ static int spidev_probe(struct spi_device *spi)
/* Initialize the driver data */
spidev->spi = spi;
mutex_init(&spidev->spi_lock);
- mutex_init(&spidev->buf_lock);

INIT_LIST_HEAD(&spidev->device_entry);

diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
index e509fdc715dbb..38c233a706c48 100644
--- a/drivers/staging/greybus/light.c
+++ b/drivers/staging/greybus/light.c
@@ -1008,14 +1008,18 @@ static int gb_lights_light_config(struct gb_lights *glights, u8 id)
if (!strlen(conf.name))
return -EINVAL;

- light->channels_count = conf.channel_count;
light->name = kstrndup(conf.name, NAMES_MAX, GFP_KERNEL);
if (!light->name)
return -ENOMEM;
- light->channels = kcalloc(light->channels_count,
+ light->channels = kcalloc(conf.channel_count,
sizeof(struct gb_channel), GFP_KERNEL);
if (!light->channels)
return -ENOMEM;
+ /*
+ * Publish channels_count only after channels allocation so cleanup
+ * doesn't walk a NULL channels pointer on allocation failure.
+ */
+ light->channels_count = conf.channel_count;

/* First we collect all the configurations for all channels */
for (i = 0; i < light->channels_count; i++) {
diff --git a/drivers/staging/rtl8723bs/core/rtw_mlme.c b/drivers/staging/rtl8723bs/core/rtw_mlme.c
index cbdb134278d37..1f5a23f13dde5 100644
--- a/drivers/staging/rtl8723bs/core/rtw_mlme.c
+++ b/drivers/staging/rtl8723bs/core/rtw_mlme.c
@@ -814,8 +814,10 @@ static void find_network(struct adapter *adapter)
struct wlan_network *tgt_network = &pmlmepriv->cur_network;

pwlan = rtw_find_network(&pmlmepriv->scanned_queue, tgt_network->network.mac_address);
- if (pwlan)
- pwlan->fixed = false;
+ if (!pwlan)
+ return;
+
+ pwlan->fixed = false;

if (check_fwstate(pmlmepriv, WIFI_ADHOC_MASTER_STATE) &&
(adapter->stapriv.asoc_sta_count == 1))
diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
index b63a74e669bcf..b300d52e241b5 100644
--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
@@ -315,9 +315,10 @@ struct cfg80211_bss *rtw_cfg80211_inform_bss(struct adapter *padapter, struct wl
len, notify_signal, GFP_ATOMIC);

if (unlikely(!bss))
- goto exit;
+ goto free_buf;

cfg80211_put_bss(wiphy, bss);
+free_buf:
kfree(buf);

exit:
diff --git a/drivers/staging/rtl8723bs/os_dep/sdio_intf.c b/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
index d18fde4e5d6ce..d64998be6b2bb 100644
--- a/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
+++ b/drivers/staging/rtl8723bs/os_dep/sdio_intf.c
@@ -379,7 +379,8 @@ static int rtw_drv_init(
if (status != _SUCCESS)
goto free_if1;

- if (sdio_alloc_irq(dvobj) != _SUCCESS)
+ status = sdio_alloc_irq(dvobj);
+ if (status != _SUCCESS)
goto free_if1;

rtw_ndev_notifier_register();
diff --git a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
index 0e2dc1426282d..c9c7451c34d7e 100644
--- a/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
+++ b/drivers/thermal/intel/int340x_thermal/processor_thermal_rfim.c
@@ -449,8 +449,11 @@ int proc_thermal_rfim_add(struct pci_dev *pdev, struct proc_thermal_device *proc

if (proc_priv->mmio_feature_mask & PROC_THERMAL_FEATURE_DLVR) {
ret = sysfs_create_group(&pdev->dev.kobj, &dlvr_attribute_group);
- if (ret)
+ if (ret) {
+ if (proc_priv->mmio_feature_mask & PROC_THERMAL_FEATURE_FIVR)
+ sysfs_remove_group(&pdev->dev.kobj, &fivr_attribute_group);
return ret;
+ }
}

if (proc_priv->mmio_feature_mask & PROC_THERMAL_FEATURE_DVFS) {
diff --git a/drivers/thermal/intel/x86_pkg_temp_thermal.c b/drivers/thermal/intel/x86_pkg_temp_thermal.c
index 8c44f378b61ef..29af9510a6161 100644
--- a/drivers/thermal/intel/x86_pkg_temp_thermal.c
+++ b/drivers/thermal/intel/x86_pkg_temp_thermal.c
@@ -127,6 +127,9 @@ sys_set_trip_temp(struct thermal_zone_device *tzd,
u32 l, h, mask, shift, intr;
int tj_max, val, ret;

+ if (temp == THERMAL_TEMP_INVALID)
+ temp = 0;
+
tj_max = intel_tcc_get_tjmax(zonedev->cpu);
if (tj_max < 0)
return tj_max;
diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
index e0aa9d9d5604b..3e674f2d66316 100644
--- a/drivers/thermal/thermal_of.c
+++ b/drivers/thermal/thermal_of.c
@@ -299,10 +299,10 @@ static bool thermal_of_cm_lookup(struct device_node *cm_np,
struct cooling_spec *c)
{
for_each_child_of_node_scoped(cm_np, child) {
- struct device_node *tr_np;
int count, i;

- tr_np = of_parse_phandle(child, "trip", 0);
+ struct device_node *tr_np __free(device_node) =
+ of_parse_phandle(child, "trip", 0);
if (tr_np != trip->priv)
continue;

diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
index 83d186c038cd9..f17dc3de020c1 100644
--- a/drivers/tty/serial/8250/8250_dw.c
+++ b/drivers/tty/serial/8250/8250_dw.c
@@ -718,11 +718,18 @@ static int dw8250_runtime_suspend(struct device *dev)

static int dw8250_runtime_resume(struct device *dev)
{
+ int ret;
struct dw8250_data *data = dev_get_drvdata(dev);

- clk_prepare_enable(data->pclk);
+ ret = clk_prepare_enable(data->pclk);
+ if (ret)
+ return ret;

- clk_prepare_enable(data->clk);
+ ret = clk_prepare_enable(data->clk);
+ if (ret) {
+ clk_disable_unprepare(data->pclk);
+ return ret;
+ }

return 0;
}
diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
index 4f57991944dc4..0f4ce0c691147 100644
--- a/drivers/tty/serial/8250/8250_omap.c
+++ b/drivers/tty/serial/8250/8250_omap.c
@@ -98,6 +98,9 @@
#define OMAP_UART_REV_52 0x0502
#define OMAP_UART_REV_63 0x0603

+/* Resume register */
+#define UART_OMAP_RESUME 0x0B
+
/* Interrupt Enable Register 2 */
#define UART_OMAP_IER2 0x1B
#define UART_OMAP_IER2_RHR_IT_DIS BIT(2)
@@ -117,7 +120,6 @@
/* Timeout low and High */
#define UART_OMAP_TO_L 0x26
#define UART_OMAP_TO_H 0x27
-
struct omap8250_priv {
void __iomem *membase;
int line;
@@ -934,7 +936,6 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
goto out;

cookie = dma->rx_cookie;
- dma->rx_running = 0;

/* Re-enable RX FIFO interrupt now that transfer is complete */
if (priv->habit & UART_HAS_RHR_IT_DIS) {
@@ -968,6 +969,7 @@ static void __dma_rx_do_complete(struct uart_8250_port *p)
goto out;
ret = tty_insert_flip_string(tty_port, dma->rx_buf, count);

+ dma->rx_running = 0;
p->port.icount.rx += ret;
p->port.icount.buf_overrun += count - ret;
out:
@@ -1268,6 +1270,20 @@ static u16 omap_8250_handle_rx_dma(struct uart_8250_port *up, u8 iir, u16 status
return status;
}

+static void am654_8250_handle_uart_errors(struct uart_8250_port *up, u8 iir, u16 status)
+{
+ if (status & UART_LSR_OE) {
+ serial8250_clear_and_reinit_fifos(up);
+ serial_in(up, UART_LSR);
+ serial_in(up, UART_OMAP_RESUME);
+ } else {
+ if (status & (UART_LSR_FE | UART_LSR_PE | UART_LSR_BI))
+ serial_in(up, UART_RX);
+ if (iir & UART_IIR_XOFF)
+ serial_in(up, UART_IIR);
+ }
+}
+
static void am654_8250_handle_rx_dma(struct uart_8250_port *up, u8 iir,
u16 status)
{
@@ -1278,7 +1294,8 @@ static void am654_8250_handle_rx_dma(struct uart_8250_port *up, u8 iir,
* Queue a new transfer if FIFO has data.
*/
if ((status & (UART_LSR_DR | UART_LSR_BI)) &&
- (up->ier & UART_IER_RDI)) {
+ (up->ier & UART_IER_RDI) && !(status & UART_LSR_OE)) {
+ am654_8250_handle_uart_errors(up, iir, status);
omap_8250_rx_dma(up);
serial_out(up, UART_OMAP_EFR2, UART_OMAP_EFR2_TIMEOUT_BEHAVE);
} else if ((iir & 0x3f) == UART_IIR_RX_TIMEOUT) {
@@ -1294,6 +1311,8 @@ static void am654_8250_handle_rx_dma(struct uart_8250_port *up, u8 iir,
serial_out(up, UART_OMAP_EFR2, 0x0);
up->ier |= UART_IER_RLSI | UART_IER_RDI;
serial_out(up, UART_IER, up->ier);
+ } else {
+ am654_8250_handle_uart_errors(up, iir, status);
}
}

diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
index 4fd789a77a13b..1e92b16b6b95e 100644
--- a/drivers/tty/serial/Kconfig
+++ b/drivers/tty/serial/Kconfig
@@ -482,14 +482,14 @@ config SERIAL_IMX
can enable its onboard serial port by enabling this option.

config SERIAL_IMX_CONSOLE
- tristate "Console on IMX serial port"
+ bool "Console on IMX serial port"
depends on SERIAL_IMX
select SERIAL_CORE_CONSOLE
help
If you have enabled the serial port on the Freescale IMX
- CPU you can make it the console by answering Y/M to this option.
+ CPU you can make it the console by answering Y to this option.

- Even if you say Y/M here, the currently visible virtual console
+ Even if you say Y here, the currently visible virtual console
(/dev/tty0) will still be used as the system console by default, but
you can alter that using a kernel command line option such as
"console=ttymxc0". (Try "man bootparam" or see the documentation of
@@ -667,7 +667,7 @@ config SERIAL_SH_SCI_EARLYCON
default ARCH_RENESAS

config SERIAL_SH_SCI_DMA
- bool "DMA support" if EXPERT
+ bool "Support for DMA on SuperH SCI(F)" if EXPERT
depends on SERIAL_SH_SCI && DMA_ENGINE
default ARCH_RENESAS

diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
index fb406e926d0cc..ba0cc2a051ff3 100644
--- a/drivers/ufs/core/ufshcd.c
+++ b/drivers/ufs/core/ufshcd.c
@@ -9767,6 +9767,8 @@ static int __ufshcd_wl_suspend(struct ufs_hba *hba, enum ufs_pm_op pm_op)

if (req_dev_pwr_mode == UFS_ACTIVE_PWR_MODE &&
req_link_state == UIC_LINK_ACTIVE_STATE) {
+ ufshcd_disable_auto_bkops(hba);
+ flush_work(&hba->eeh_work);
goto vops_suspend;
}

diff --git a/drivers/ufs/host/Kconfig b/drivers/ufs/host/Kconfig
index 580c8d0bd8bbd..626bb9002f4a1 100644
--- a/drivers/ufs/host/Kconfig
+++ b/drivers/ufs/host/Kconfig
@@ -72,6 +72,7 @@ config SCSI_UFS_QCOM
config SCSI_UFS_MEDIATEK
tristate "Mediatek specific hooks to UFS controller platform driver"
depends on SCSI_UFSHCD_PLATFORM && ARCH_MEDIATEK
+ depends on PM
depends on RESET_CONTROLLER
select PHY_MTK_UFS
select RESET_TI_SYSCON
diff --git a/drivers/ufs/host/ufs-mediatek-trace.h b/drivers/ufs/host/ufs-mediatek-trace.h
index b5f2ec3140748..0df8ac843379a 100644
--- a/drivers/ufs/host/ufs-mediatek-trace.h
+++ b/drivers/ufs/host/ufs-mediatek-trace.h
@@ -33,19 +33,19 @@ TRACE_EVENT(ufs_mtk_clk_scale,
TP_ARGS(name, scale_up, clk_rate),

TP_STRUCT__entry(
- __field(const char*, name)
+ __string(name, name)
__field(bool, scale_up)
__field(unsigned long, clk_rate)
),

TP_fast_assign(
- __entry->name = name;
+ __assign_str(name);
__entry->scale_up = scale_up;
__entry->clk_rate = clk_rate;
),

TP_printk("ufs: clk (%s) scaled %s @ %lu",
- __entry->name,
+ __get_str(name),
__entry->scale_up ? "up" : "down",
__entry->clk_rate)
);
diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
index 1fb98af4ac564..e4156238f51d9 100644
--- a/drivers/ufs/host/ufs-mediatek.c
+++ b/drivers/ufs/host/ufs-mediatek.c
@@ -1987,7 +1987,6 @@ static void ufs_mtk_remove(struct platform_device *pdev)
ufshcd_pltfrm_remove(pdev);
}

-#ifdef CONFIG_PM_SLEEP
static int ufs_mtk_system_suspend(struct device *dev)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
@@ -2034,9 +2033,7 @@ static int ufs_mtk_system_resume(struct device *dev)

return ret;
}
-#endif

-#ifdef CONFIG_PM
static int ufs_mtk_runtime_suspend(struct device *dev)
{
struct ufs_hba *hba = dev_get_drvdata(dev);
@@ -2067,13 +2064,10 @@ static int ufs_mtk_runtime_resume(struct device *dev)

return ufshcd_runtime_resume(dev);
}
-#endif

static const struct dev_pm_ops ufs_mtk_pm_ops = {
- SET_SYSTEM_SLEEP_PM_OPS(ufs_mtk_system_suspend,
- ufs_mtk_system_resume)
- SET_RUNTIME_PM_OPS(ufs_mtk_runtime_suspend,
- ufs_mtk_runtime_resume, NULL)
+ SYSTEM_SLEEP_PM_OPS(ufs_mtk_system_suspend, ufs_mtk_system_resume)
+ RUNTIME_PM_OPS(ufs_mtk_runtime_suspend, ufs_mtk_runtime_resume, NULL)
.prepare = ufshcd_suspend_prepare,
.complete = ufshcd_resume_complete,
};
@@ -2083,7 +2077,7 @@ static struct platform_driver ufs_mtk_pltform = {
.remove_new = ufs_mtk_remove,
.driver = {
.name = "ufshcd-mtk",
- .pm = &ufs_mtk_pm_ops,
+ .pm = pm_ptr(&ufs_mtk_pm_ops),
.of_match_table = ufs_mtk_of_match,
},
};
diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
index 8f73bd5057a64..5f28aac85d7d4 100644
--- a/drivers/usb/chipidea/udc.c
+++ b/drivers/usb/chipidea/udc.c
@@ -919,6 +919,13 @@ __acquires(hwep->lock)
list_del_init(&hwreq->queue);
hwreq->req.status = -ESHUTDOWN;

+ /* Unmap DMA and clean up bounce buffers before giving back */
+ usb_gadget_unmap_request_by_dev(hwep->ci->dev->parent,
+ &hwreq->req, hwep->dir);
+
+ if (hwreq->sgt.sgl)
+ sglist_do_debounce(hwreq, false);
+
if (hwreq->req.complete != NULL) {
spin_unlock(hwep->lock);
usb_gadget_giveback_request(&hwep->ep, &hwreq->req);
diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
index 9919ab725d544..368da20ae14d1 100644
--- a/drivers/usb/dwc2/core.c
+++ b/drivers/usb/dwc2/core.c
@@ -577,6 +577,7 @@ void dwc2_force_dr_mode(struct dwc2_hsotg *hsotg)
{
switch (hsotg->dr_mode) {
case USB_DR_MODE_HOST:
+ dwc2_force_mode(hsotg, true);
/*
* NOTE: This is required for some rockchip soc based
* platforms on their host-only dwc2.
diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
index b37a86f2bcca9..526b6a1fa3540 100644
--- a/drivers/usb/dwc3/core.c
+++ b/drivers/usb/dwc3/core.c
@@ -2110,6 +2110,20 @@ static int dwc3_get_num_ports(struct dwc3 *dwc)
return 0;
}

+static void dwc3_vbus_draw_work(struct work_struct *work)
+{
+ struct dwc3 *dwc = container_of(work, struct dwc3, vbus_draw_work);
+ union power_supply_propval val = {0};
+ int ret;
+
+ val.intval = 1000 * (dwc->current_limit);
+ ret = power_supply_set_property(dwc->usb_psy, POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, &val);
+
+ if (ret < 0)
+ dev_dbg(dwc->dev, "Error (%d) setting vbus draw (%d mA)\n",
+ ret, dwc->current_limit);
+}
+
static struct power_supply *dwc3_get_usb_power_supply(struct dwc3 *dwc)
{
struct power_supply *usb_psy;
@@ -2124,6 +2138,7 @@ static struct power_supply *dwc3_get_usb_power_supply(struct dwc3 *dwc)
if (!usb_psy)
return ERR_PTR(-EPROBE_DEFER);

+ INIT_WORK(&dwc->vbus_draw_work, dwc3_vbus_draw_work);
return usb_psy;
}

@@ -2332,8 +2347,10 @@ static void dwc3_remove(struct platform_device *pdev)

dwc3_free_event_buffers(dwc);

- if (dwc->usb_psy)
+ if (dwc->usb_psy) {
+ cancel_work_sync(&dwc->vbus_draw_work);
power_supply_put(dwc->usb_psy);
+ }
}

#ifdef CONFIG_PM
diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
index d47ea3b3528ec..eb8b8f3e52167 100644
--- a/drivers/usb/dwc3/core.h
+++ b/drivers/usb/dwc3/core.h
@@ -1052,6 +1052,8 @@ struct dwc3_scratchpad_array {
* @role_switch_default_mode: default operation mode of controller while
* usb role is USB_ROLE_NONE.
* @usb_psy: pointer to power supply interface.
+ * @vbus_draw_work: Work to set the vbus drawing limit
+ * @current_limit: How much current to draw from vbus, in milliAmperes.
* @usb2_phy: pointer to USB2 PHY
* @usb3_phy: pointer to USB3 PHY
* @usb2_generic_phy: pointer to array of USB2 PHYs
@@ -1235,6 +1237,8 @@ struct dwc3 {
enum usb_dr_mode role_switch_default_mode;

struct power_supply *usb_psy;
+ struct work_struct vbus_draw_work;
+ unsigned int current_limit;

u32 fladj;
u32 ref_clk_per;
diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
index 6d1072d041f76..b91093110e688 100644
--- a/drivers/usb/dwc3/gadget.c
+++ b/drivers/usb/dwc3/gadget.c
@@ -3111,8 +3111,6 @@ static void dwc3_gadget_set_ssp_rate(struct usb_gadget *g,
static int dwc3_gadget_vbus_draw(struct usb_gadget *g, unsigned int mA)
{
struct dwc3 *dwc = gadget_to_dwc(g);
- union power_supply_propval val = {0};
- int ret;

if (dwc->usb2_phy)
return usb_phy_set_power(dwc->usb2_phy, mA);
@@ -3120,10 +3118,10 @@ static int dwc3_gadget_vbus_draw(struct usb_gadget *g, unsigned int mA)
if (!dwc->usb_psy)
return -EOPNOTSUPP;

- val.intval = 1000 * mA;
- ret = power_supply_set_property(dwc->usb_psy, POWER_SUPPLY_PROP_INPUT_CURRENT_LIMIT, &val);
+ dwc->current_limit = mA;
+ schedule_work(&dwc->vbus_draw_work);

- return ret;
+ return 0;
}

/**
diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
index f7be1548cc18a..e03ac64361cc5 100644
--- a/drivers/usb/gadget/function/f_fs.c
+++ b/drivers/usb/gadget/function/f_fs.c
@@ -1499,7 +1499,7 @@ static int ffs_dmabuf_attach(struct file *file, int fd)
goto err_dmabuf_detach;
}

- dir = epfile->in ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+ dir = epfile->in ? DMA_TO_DEVICE : DMA_FROM_DEVICE;

err = ffs_dma_resv_lock(dmabuf, nonblock);
if (err)
@@ -1629,7 +1629,7 @@ static int ffs_dmabuf_transfer(struct file *file,
/* Make sure we don't have writers */
timeout = nonblock ? 0 : msecs_to_jiffies(DMABUF_ENQUEUE_TIMEOUT_MS);
retl = dma_resv_wait_timeout(dmabuf->resv,
- dma_resv_usage_rw(epfile->in),
+ dma_resv_usage_rw(!epfile->in),
true, timeout);
if (retl == 0)
retl = -EBUSY;
@@ -1674,7 +1674,7 @@ static int ffs_dmabuf_transfer(struct file *file,
dma_fence_init(&fence->base, &ffs_dmabuf_fence_ops,
&priv->lock, priv->context, seqno);

- resv_dir = epfile->in ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ;
+ resv_dir = epfile->in ? DMA_RESV_USAGE_READ : DMA_RESV_USAGE_WRITE;

dma_resv_add_fence(dmabuf->resv, &fence->base, resv_dir);
dma_resv_unlock(dmabuf->resv);
@@ -1734,10 +1734,8 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
{
int fd;

- if (copy_from_user(&fd, (void __user *)value, sizeof(fd))) {
- ret = -EFAULT;
- break;
- }
+ if (copy_from_user(&fd, (void __user *)value, sizeof(fd)))
+ return -EFAULT;

return ffs_dmabuf_attach(file, fd);
}
@@ -1745,10 +1743,8 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
{
int fd;

- if (copy_from_user(&fd, (void __user *)value, sizeof(fd))) {
- ret = -EFAULT;
- break;
- }
+ if (copy_from_user(&fd, (void __user *)value, sizeof(fd)))
+ return -EFAULT;

return ffs_dmabuf_detach(file, fd);
}
@@ -1756,10 +1752,8 @@ static long ffs_epfile_ioctl(struct file *file, unsigned code,
{
struct usb_ffs_dmabuf_transfer_req req;

- if (copy_from_user(&req, (void __user *)value, sizeof(req))) {
- ret = -EFAULT;
- break;
- }
+ if (copy_from_user(&req, (void __user *)value, sizeof(req)))
+ return -EFAULT;

return ffs_dmabuf_transfer(file, &req);
}
diff --git a/drivers/usb/gadget/udc/bdc/bdc_core.c b/drivers/usb/gadget/udc/bdc/bdc_core.c
index 5149e2b7f0508..7fded329076cc 100644
--- a/drivers/usb/gadget/udc/bdc/bdc_core.c
+++ b/drivers/usb/gadget/udc/bdc/bdc_core.c
@@ -35,8 +35,8 @@ static int poll_oip(struct bdc *bdc, u32 usec)
u32 status;
int ret;

- ret = readl_poll_timeout(bdc->regs + BDC_BDCSC, status,
- (BDC_CSTS(status) != BDC_OIP), 10, usec);
+ ret = readl_poll_timeout_atomic(bdc->regs + BDC_BDCSC, status,
+ (BDC_CSTS(status) != BDC_OIP), 10, usec);
if (ret)
dev_err(bdc->dev, "operation timedout BDCSC: 0x%08x\n", status);
else
diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
index 3a14b6b72d8c6..114a80dd06fbb 100644
--- a/drivers/usb/gadget/udc/tegra-xudc.c
+++ b/drivers/usb/gadget/udc/tegra-xudc.c
@@ -3388,17 +3388,18 @@ static void tegra_xudc_device_params_init(struct tegra_xudc *xudc)
{
u32 val, imod;

+ val = xudc_readl(xudc, BLCG);
if (xudc->soc->has_ipfs) {
- val = xudc_readl(xudc, BLCG);
val |= BLCG_ALL;
val &= ~(BLCG_DFPCI | BLCG_UFPCI | BLCG_FE |
BLCG_COREPLL_PWRDN);
val |= BLCG_IOPLL_0_PWRDN;
val |= BLCG_IOPLL_1_PWRDN;
val |= BLCG_IOPLL_2_PWRDN;
-
- xudc_writel(xudc, val, BLCG);
+ } else {
+ val &= ~BLCG_COREPLL_PWRDN;
}
+ xudc_writel(xudc, val, BLCG);

if (xudc->soc->port_speed_quirk)
tegra_xudc_limit_port_speed(xudc);
@@ -3949,6 +3950,7 @@ static void tegra_xudc_remove(struct platform_device *pdev)
static int __maybe_unused tegra_xudc_powergate(struct tegra_xudc *xudc)
{
unsigned long flags;
+ u32 val;

dev_dbg(xudc->dev, "entering ELPG\n");

@@ -3961,6 +3963,10 @@ static int __maybe_unused tegra_xudc_powergate(struct tegra_xudc *xudc)

spin_unlock_irqrestore(&xudc->lock, flags);

+ val = xudc_readl(xudc, BLCG);
+ val |= BLCG_COREPLL_PWRDN;
+ xudc_writel(xudc, val, BLCG);
+
clk_bulk_disable_unprepare(xudc->soc->num_clks, xudc->clks);

regulator_bulk_disable(xudc->soc->num_supplies, xudc->supplies);
diff --git a/drivers/usb/typec/ucsi/psy.c b/drivers/usb/typec/ucsi/psy.c
index 7f176382b1a0f..c7d2fb611be4c 100644
--- a/drivers/usb/typec/ucsi/psy.c
+++ b/drivers/usb/typec/ucsi/psy.c
@@ -88,15 +88,20 @@ static int ucsi_psy_get_voltage_max(struct ucsi_connector *con,
union power_supply_propval *val)
{
u32 pdo;
+ int max_voltage = 0;

switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
case UCSI_CONSTAT_PWR_OPMODE_PD:
- if (con->num_pdos > 0) {
- pdo = con->src_pdos[con->num_pdos - 1];
- val->intval = pdo_fixed_voltage(pdo) * 1000;
- } else {
- val->intval = 0;
+ for (int i = 0; i < con->num_pdos; i++) {
+ int pdo_voltage = 0;
+
+ pdo = con->src_pdos[i];
+ if (pdo_type(pdo) == PDO_TYPE_FIXED)
+ pdo_voltage = pdo_fixed_voltage(pdo) * 1000;
+ max_voltage = (pdo_voltage > max_voltage) ? pdo_voltage
+ : max_voltage;
}
+ val->intval = max_voltage;
break;
case UCSI_CONSTAT_PWR_OPMODE_TYPEC3_0:
case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
@@ -144,6 +149,7 @@ static int ucsi_psy_get_current_max(struct ucsi_connector *con,
union power_supply_propval *val)
{
u32 pdo;
+ int max_current = 0;

if (!(con->status.flags & UCSI_CONSTAT_CONNECTED)) {
val->intval = 0;
@@ -152,12 +158,16 @@ static int ucsi_psy_get_current_max(struct ucsi_connector *con,

switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) {
case UCSI_CONSTAT_PWR_OPMODE_PD:
- if (con->num_pdos > 0) {
- pdo = con->src_pdos[con->num_pdos - 1];
- val->intval = pdo_max_current(pdo) * 1000;
- } else {
- val->intval = 0;
+ for (int i = 0; i < con->num_pdos; i++) {
+ int pdo_current = 0;
+
+ pdo = con->src_pdos[i];
+ if (pdo_type(pdo) == PDO_TYPE_FIXED)
+ pdo_current = pdo_max_current(pdo) * 1000;
+ max_current = (pdo_current > max_current) ? pdo_current
+ : max_current;
}
+ val->intval = max_current;
break;
case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5:
val->intval = UCSI_TYPEC_1_5_CURRENT * 1000;
diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
index 51b2485e874f4..6b8e9da15ed0b 100644
--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
+++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
@@ -3639,9 +3639,6 @@ static int mlx5_set_group_asid(struct vdpa_device *vdev, u32 group,
struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev);
int err = 0;

- if (group >= MLX5_VDPA_NUMVQ_GROUPS)
- return -EINVAL;
-
mvdev->mres.group2asid[group] = asid;

mutex_lock(&mvdev->mres.lock);
diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
index 8ffea8430f95f..481c52be50231 100644
--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
+++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
@@ -606,12 +606,6 @@ static int vdpasim_set_group_asid(struct vdpa_device *vdpa, unsigned int group,
struct vhost_iotlb *iommu;
int i;

- if (group > vdpasim->dev_attr.ngroups)
- return -EINVAL;
-
- if (asid >= vdpasim->dev_attr.nas)
- return -EINVAL;
-
iommu = &vdpasim->iommu[asid];

mutex_lock(&vdpasim->mutex);
diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
index dda8cb3262e0b..1a36b687e0085 100644
--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
@@ -1139,8 +1139,7 @@ static void hisi_acc_vf_pci_aer_reset_done(struct pci_dev *pdev)
{
struct hisi_acc_vf_core_device *hisi_acc_vdev = hisi_acc_drvdata(pdev);

- if (hisi_acc_vdev->core_device.vdev.migration_flags !=
- VFIO_MIGRATION_STOP_COPY)
+ if (!hisi_acc_vdev->core_device.vdev.mig_ops)
return;

mutex_lock(&hisi_acc_vdev->state_mutex);
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index c7ea0b23924af..5f545b45078f8 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -590,6 +590,7 @@ EXPORT_SYMBOL_GPL(vfio_pci_core_enable);

void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
{
+ struct pci_dev *bridge;
struct pci_dev *pdev = vdev->pdev;
struct vfio_pci_dummy_resource *dummy_res, *tmp;
struct vfio_pci_ioeventfd *ioeventfd, *ioeventfd_tmp;
@@ -696,12 +697,20 @@ void vfio_pci_core_disable(struct vfio_pci_core_device *vdev)
* We can not use the "try" reset interface here, which will
* overwrite the previously restored configuration information.
*/
- if (vdev->reset_works && pci_dev_trylock(pdev)) {
- if (!__pci_reset_function_locked(pdev))
- vdev->needs_reset = false;
- pci_dev_unlock(pdev);
+ if (vdev->reset_works) {
+ bridge = pci_upstream_bridge(pdev);
+ if (bridge && !pci_dev_trylock(bridge))
+ goto out_restore_state;
+ if (pci_dev_trylock(pdev)) {
+ if (!__pci_reset_function_locked(pdev))
+ vdev->needs_reset = false;
+ pci_dev_unlock(pdev);
+ }
+ if (bridge)
+ pci_dev_unlock(bridge);
}

+out_restore_state:
pci_restore_state(pdev);
out:
pci_disable_device(pdev);
diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 5a49b5a6d4964..0ba5237980bbf 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -680,7 +680,7 @@ static long vhost_vdpa_vring_ioctl(struct vhost_vdpa *v, unsigned int cmd,
case VHOST_VDPA_SET_GROUP_ASID:
if (copy_from_user(&s, argp, sizeof(s)))
return -EFAULT;
- if (s.num >= vdpa->nas)
+ if (idx >= vdpa->ngroups || s.num >= vdpa->nas)
return -EINVAL;
if (!ops->set_group_asid)
return -EOPNOTSUPP;
@@ -1527,6 +1527,7 @@ static int vhost_vdpa_mmap(struct file *file, struct vm_area_struct *vma)
if (vma->vm_end - vma->vm_start != notify.size)
return -ENOTSUPP;

+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
vm_flags_set(vma, VM_IO | VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP);
vma->vm_ops = &vhost_vdpa_vm_ops;
return 0;
diff --git a/drivers/video/backlight/qcom-wled.c b/drivers/video/backlight/qcom-wled.c
index b19e5f73de8bb..0d55818f554ec 100644
--- a/drivers/video/backlight/qcom-wled.c
+++ b/drivers/video/backlight/qcom-wled.c
@@ -1244,6 +1244,15 @@ static const struct wled_var_cfg wled4_ovp_cfg = {
.size = ARRAY_SIZE(wled4_ovp_values),
};

+static const u32 pmi8994_wled_ovp_values[] = {
+ 31000, 29500, 19400, 17800,
+};
+
+static const struct wled_var_cfg pmi8994_wled_ovp_cfg = {
+ .values = pmi8994_wled_ovp_values,
+ .size = ARRAY_SIZE(pmi8994_wled_ovp_values),
+};
+
static inline u32 wled5_ovp_values_fn(u32 idx)
{
/*
@@ -1357,6 +1366,29 @@ static int wled_configure(struct wled *wled)
},
};

+ const struct wled_u32_opts pmi8994_wled_opts[] = {
+ {
+ .name = "qcom,current-boost-limit",
+ .val_ptr = &cfg->boost_i_limit,
+ .cfg = &wled4_boost_i_limit_cfg,
+ },
+ {
+ .name = "qcom,current-limit-microamp",
+ .val_ptr = &cfg->string_i_limit,
+ .cfg = &wled4_string_i_limit_cfg,
+ },
+ {
+ .name = "qcom,ovp-millivolt",
+ .val_ptr = &cfg->ovp,
+ .cfg = &pmi8994_wled_ovp_cfg,
+ },
+ {
+ .name = "qcom,switching-freq",
+ .val_ptr = &cfg->switch_freq,
+ .cfg = &wled3_switch_freq_cfg,
+ },
+ };
+
const struct wled_u32_opts wled5_opts[] = {
{
.name = "qcom,current-boost-limit",
@@ -1423,8 +1455,14 @@ static int wled_configure(struct wled *wled)
break;

case 4:
- u32_opts = wled4_opts;
- size = ARRAY_SIZE(wled4_opts);
+ if (of_device_is_compatible(dev->of_node, "qcom,pmi8950-wled") ||
+ of_device_is_compatible(dev->of_node, "qcom,pmi8994-wled")) {
+ u32_opts = pmi8994_wled_opts;
+ size = ARRAY_SIZE(pmi8994_wled_opts);
+ } else {
+ u32_opts = wled4_opts;
+ size = ARRAY_SIZE(wled4_opts);
+ }
*cfg = wled4_config_defaults;
wled->wled_set_brightness = wled4_set_brightness;
wled->wled_sync_toggle = wled3_sync_toggle;
diff --git a/drivers/video/fbdev/au1200fb.c b/drivers/video/fbdev/au1200fb.c
index ed770222660b5..685e629e7e164 100644
--- a/drivers/video/fbdev/au1200fb.c
+++ b/drivers/video/fbdev/au1200fb.c
@@ -1724,8 +1724,10 @@ static int au1200fb_drv_probe(struct platform_device *dev)

/* Now hook interrupt too */
irq = platform_get_irq(dev, 0);
- if (irq < 0)
- return irq;
+ if (irq < 0) {
+ ret = irq;
+ goto failed;
+ }

ret = request_irq(irq, au1200fb_handle_irq,
IRQF_SHARED, "lcd", (void *)dev);
diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
index e681066736dea..abe9e0ad9b43b 100644
--- a/drivers/video/fbdev/core/fbcon.c
+++ b/drivers/video/fbdev/core/fbcon.c
@@ -1043,7 +1043,8 @@ static void fbcon_init(struct vc_data *vc, bool init)
return;

if (!info->fbcon_par)
- con2fb_acquire_newinfo(vc, info, vc->vc_num);
+ if (con2fb_acquire_newinfo(vc, info, vc->vc_num))
+ return;

/* If we are not the first console on this
fb, copy the font from that console */
diff --git a/drivers/video/fbdev/core/fbcon.h b/drivers/video/fbdev/core/fbcon.h
index 4d97e6d8a16a2..7e21c8b336692 100644
--- a/drivers/video/fbdev/core/fbcon.h
+++ b/drivers/video/fbdev/core/fbcon.h
@@ -30,7 +30,6 @@ struct fbcon_display {
#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
u_short scrollmode; /* Scroll Method, use fb_scrollmode() */
#endif
- u_short inverse; /* != 0 text black on white as default */
short yscroll; /* Hardware scrolling */
int vrows; /* number of virtual rows */
int cursor_shape;
diff --git a/drivers/video/fbdev/ffb.c b/drivers/video/fbdev/ffb.c
index 34b6abff9493e..da531b4cb4513 100644
--- a/drivers/video/fbdev/ffb.c
+++ b/drivers/video/fbdev/ffb.c
@@ -335,6 +335,9 @@ struct ffb_dac {
};

#define FFB_DAC_UCTRL 0x1001 /* User Control */
+#define FFB_DAC_UCTRL_OVENAB 0x00000008 /* Overlay Enable */
+#define FFB_DAC_UCTRL_WMODE 0x00000030 /* Window Mode */
+#define FFB_DAC_UCTRL_WM_COMB 0x00000000 /* Window Mode = Combined */
#define FFB_DAC_UCTRL_MANREV 0x00000f00 /* 4-bit Manufacturing Revision */
#define FFB_DAC_UCTRL_MANREV_SHIFT 8
#define FFB_DAC_TGEN 0x6000 /* Timing Generator */
@@ -425,7 +428,7 @@ static void ffb_switch_from_graph(struct ffb_par *par)
{
struct ffb_fbc __iomem *fbc = par->fbc;
struct ffb_dac __iomem *dac = par->dac;
- unsigned long flags;
+ unsigned long flags, uctrl;

spin_lock_irqsave(&par->lock, flags);
FFBWait(par);
@@ -450,6 +453,15 @@ static void ffb_switch_from_graph(struct ffb_par *par)
upa_writel((FFB_DAC_CUR_CTRL_P0 |
FFB_DAC_CUR_CTRL_P1), &dac->value2);

+ /* Disable overlay and window modes. */
+ upa_writel(FFB_DAC_UCTRL, &dac->type);
+ uctrl = upa_readl(&dac->value);
+ uctrl &= ~FFB_DAC_UCTRL_WMODE;
+ uctrl |= FFB_DAC_UCTRL_WM_COMB;
+ uctrl &= ~FFB_DAC_UCTRL_OVENAB;
+ upa_writel(FFB_DAC_UCTRL, &dac->type);
+ upa_writel(uctrl, &dac->value);
+
spin_unlock_irqrestore(&par->lock, flags);
}

diff --git a/drivers/video/fbdev/vt8500lcdfb.c b/drivers/video/fbdev/vt8500lcdfb.c
index b08a6fdc53fd2..85c7a99a7d648 100644
--- a/drivers/video/fbdev/vt8500lcdfb.c
+++ b/drivers/video/fbdev/vt8500lcdfb.c
@@ -369,7 +369,7 @@ static int vt8500lcd_probe(struct platform_device *pdev)
if (fbi->palette_cpu == NULL) {
dev_err(&pdev->dev, "Failed to allocate palette buffer\n");
ret = -ENOMEM;
- goto failed_free_io;
+ goto failed_free_mem_virt;
}

irq = platform_get_irq(pdev, 0);
@@ -432,6 +432,9 @@ static int vt8500lcd_probe(struct platform_device *pdev)
failed_free_palette:
dma_free_coherent(&pdev->dev, fbi->palette_size,
fbi->palette_cpu, fbi->palette_phys);
+failed_free_mem_virt:
+ dma_free_coherent(&pdev->dev, fbi->fb.fix.smem_len,
+ fbi->fb.screen_buffer, fbi->fb.fix.smem_start);
failed_free_io:
iounmap(fbi->regbase);
failed_free_res:
diff --git a/drivers/video/of_display_timing.c b/drivers/video/of_display_timing.c
index bebd371c6b93e..a6ec392253c3e 100644
--- a/drivers/video/of_display_timing.c
+++ b/drivers/video/of_display_timing.c
@@ -181,7 +181,7 @@ struct display_timings *of_get_display_timings(const struct device_node *np)
if (disp->num_timings == 0) {
/* should never happen, as entry was already found above */
pr_err("%pOF: no timings specified\n", np);
- goto entryfail;
+ goto timingfail;
}

disp->timings = kcalloc(disp->num_timings,
@@ -189,13 +189,13 @@ struct display_timings *of_get_display_timings(const struct device_node *np)
GFP_KERNEL);
if (!disp->timings) {
pr_err("%pOF: could not allocate timings array\n", np);
- goto entryfail;
+ goto timingfail;
}

disp->num_timings = 0;
disp->native_mode = 0;

- for_each_child_of_node(timings_np, entry) {
+ for_each_child_of_node_scoped(timings_np, child) {
struct display_timing *dt;
int r;

@@ -206,7 +206,7 @@ struct display_timings *of_get_display_timings(const struct device_node *np)
goto timingfail;
}

- r = of_parse_display_timing(entry, dt);
+ r = of_parse_display_timing(child, dt);
if (r) {
/*
* to not encourage wrong devicetrees, fail in case of
@@ -218,7 +218,7 @@ struct display_timings *of_get_display_timings(const struct device_node *np)
goto timingfail;
}

- if (native_mode == entry)
+ if (native_mode == child)
disp->native_mode = disp->num_timings;

disp->timings[disp->num_timings] = dt;
diff --git a/drivers/watchdog/imx7ulp_wdt.c b/drivers/watchdog/imx7ulp_wdt.c
index 0f13a30533574..03479110453ce 100644
--- a/drivers/watchdog/imx7ulp_wdt.c
+++ b/drivers/watchdog/imx7ulp_wdt.c
@@ -346,6 +346,7 @@ static int imx7ulp_wdt_probe(struct platform_device *pdev)
watchdog_stop_on_reboot(wdog);
watchdog_stop_on_unregister(wdog);
watchdog_set_drvdata(wdog, imx7ulp_wdt);
+ watchdog_set_nowayout(wdog, nowayout);

imx7ulp_wdt->hw = of_device_get_match_data(dev);
ret = imx7ulp_wdt_init(imx7ulp_wdt, wdog->timeout * imx7ulp_wdt->hw->wdog_clock_rate);
diff --git a/drivers/watchdog/it87_wdt.c b/drivers/watchdog/it87_wdt.c
index 1a5a0a2c3f2e3..8c42cb41fa230 100644
--- a/drivers/watchdog/it87_wdt.c
+++ b/drivers/watchdog/it87_wdt.c
@@ -186,6 +186,12 @@ static void _wdt_update_timeout(unsigned int t)
superio_outb(t >> 8, WDTVALMSB);
}

+/* Internal function, should be called after superio_select(GPIO) */
+static bool _wdt_running(void)
+{
+ return superio_inb(WDTVALLSB) || (max_units > 255 && superio_inb(WDTVALMSB));
+}
+
static int wdt_update_timeout(unsigned int t)
{
int ret;
@@ -372,6 +378,12 @@ static int __init it87_wdt_init(void)
}
}

+ /* wdt already left running by firmware? */
+ if (_wdt_running()) {
+ pr_info("Left running by firmware.\n");
+ set_bit(WDOG_HW_RUNNING, &wdt_dev.status);
+ }
+
superio_exit();

if (timeout < 1 || timeout > max_units * 60) {
diff --git a/drivers/watchdog/starfive-wdt.c b/drivers/watchdog/starfive-wdt.c
index 763b11b6f402c..8244f282bee86 100644
--- a/drivers/watchdog/starfive-wdt.c
+++ b/drivers/watchdog/starfive-wdt.c
@@ -446,7 +446,7 @@ static int starfive_wdt_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, wdt);
pm_runtime_enable(&pdev->dev);
if (pm_runtime_enabled(&pdev->dev)) {
- ret = pm_runtime_get_sync(&pdev->dev);
+ ret = pm_runtime_resume_and_get(&pdev->dev);
if (ret < 0)
return ret;
} else {
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index e47bb157aa090..88511187458a9 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -720,6 +720,7 @@ static int __init balloon_add_regions(void)
static int __init balloon_init(void)
{
struct task_struct *task;
+ unsigned long current_pages;
int rc;

if (!xen_domain())
@@ -727,12 +728,18 @@ static int __init balloon_init(void)

pr_info("Initialising balloon driver\n");

- if (xen_released_pages >= get_num_physpages()) {
- WARN(1, "Released pages underflow current target");
- return -ERANGE;
+ if (xen_pv_domain()) {
+ if (xen_released_pages >= xen_start_info->nr_pages)
+ goto underflow;
+ current_pages = min(xen_start_info->nr_pages -
+ xen_released_pages, max_pfn);
+ } else {
+ if (xen_unpopulated_pages >= get_num_physpages())
+ goto underflow;
+ current_pages = get_num_physpages() - xen_unpopulated_pages;
}

- balloon_stats.current_pages = get_num_physpages() - xen_released_pages;
+ balloon_stats.current_pages = current_pages;
balloon_stats.target_pages = balloon_stats.current_pages;
balloon_stats.balloon_low = 0;
balloon_stats.balloon_high = 0;
@@ -763,6 +770,10 @@ static int __init balloon_init(void)
xen_balloon_init();

return 0;
+
+ underflow:
+ WARN(1, "Released pages underflow current target");
+ return -ERANGE;
}
subsys_initcall(balloon_init);

diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 29257d2639dbf..43a918c498c6c 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -362,7 +362,8 @@ static int xen_grant_init_backend_domid(struct device *dev,
if (np) {
ret = xen_dt_grant_init_backend_domid(dev, np, backend_domid);
of_node_put(np);
- } else if (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain()) {
+ } else if (!xen_initial_domain() &&
+ (IS_ENABLED(CONFIG_XEN_VIRTIO_FORCE_GRANT) || xen_pv_domain())) {
dev_info(dev, "Using dom0 as backend\n");
*backend_domid = 0;
ret = 0;
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index a39f2d36dd9cf..ae46291e99a9d 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -18,6 +18,9 @@ static unsigned int list_count;

static struct resource *target_resource;

+/* Pages to subtract from the memory count when setting balloon target. */
+unsigned long xen_unpopulated_pages __initdata;
+
/*
* If arch is not happy with system "iomem_resource" being used for
* the region allocation it can provide it's own view by creating specific
diff --git a/drivers/xen/xenbus/xenbus_probe_frontend.c b/drivers/xen/xenbus/xenbus_probe_frontend.c
index fcb335bb7b187..1fdf5be193430 100644
--- a/drivers/xen/xenbus/xenbus_probe_frontend.c
+++ b/drivers/xen/xenbus/xenbus_probe_frontend.c
@@ -148,11 +148,9 @@ static void xenbus_frontend_dev_shutdown(struct device *_dev)
}

static const struct dev_pm_ops xenbus_pm_ops = {
- .suspend = xenbus_dev_suspend,
- .resume = xenbus_frontend_dev_resume,
.freeze = xenbus_dev_suspend,
.thaw = xenbus_dev_cancel,
- .restore = xenbus_dev_resume,
+ .restore = xenbus_frontend_dev_resume,
};

static struct xen_bus_type xenbus_frontend = {
diff --git a/fs/btrfs/block-rsv.c b/fs/btrfs/block-rsv.c
index a07b9594dc706..a578bd3b32bb0 100644
--- a/fs/btrfs/block-rsv.c
+++ b/fs/btrfs/block-rsv.c
@@ -280,10 +280,11 @@ u64 btrfs_block_rsv_release(struct btrfs_fs_info *fs_info,
struct btrfs_block_rsv *target = NULL;

/*
- * If we are a delayed block reserve then push to the global rsv,
- * otherwise dump into the global delayed reserve if it is not full.
+ * If we are a delayed refs block reserve then push to the global
+ * reserve, otherwise dump into the global delayed refs reserve if it is
+ * not full.
*/
- if (block_rsv->type == BTRFS_BLOCK_RSV_DELOPS)
+ if (block_rsv->type == BTRFS_BLOCK_RSV_DELREFS)
target = global_rsv;
else if (block_rsv != global_rsv && !btrfs_block_rsv_full(delayed_rsv))
target = delayed_rsv;
diff --git a/fs/btrfs/direct-io.c b/fs/btrfs/direct-io.c
index 71984d7db839b..fb414643f223d 100644
--- a/fs/btrfs/direct-io.c
+++ b/fs/btrfs/direct-io.c
@@ -803,6 +803,8 @@ ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
ssize_t ret;
unsigned int ilock_flags = 0;
struct iomap_dio *dio;
+ const u64 data_profile = btrfs_data_alloc_profile(fs_info) &
+ BTRFS_BLOCK_GROUP_PROFILE_MASK;

if (iocb->ki_flags & IOCB_NOWAIT)
ilock_flags |= BTRFS_ILOCK_TRY;
@@ -816,6 +818,16 @@ ssize_t btrfs_direct_write(struct kiocb *iocb, struct iov_iter *from)
if (iocb->ki_pos + iov_iter_count(from) <= i_size_read(inode) && IS_NOSEC(inode))
ilock_flags |= BTRFS_ILOCK_SHARED;

+ /*
+ * If our data profile has duplication (either extra mirrors or RAID56),
+ * we cannot trust the direct IO buffer: its content may change during
+ * writeback and cause different contents to be written to different mirrors.
+ *
+ * Thus only RAID0 and SINGLE profiles can do true zero-copy direct IO.
+ */
+ if (data_profile != BTRFS_BLOCK_GROUP_RAID0 && data_profile != 0)
+ goto buffered;
+
relock:
ret = btrfs_inode_lock(BTRFS_I(inode), ilock_flags);
if (ret < 0)
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 7bab2512468d5..c052c8df05fb4 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -6559,6 +6559,10 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
range->minlen);

trimmed += group_trimmed;
+ if (ret == -ERESTARTSYS || ret == -EINTR) {
+ btrfs_put_block_group(cache);
+ break;
+ }
if (ret) {
bg_failed++;
bg_ret = ret;
@@ -6572,6 +6576,9 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
"failed to trim %llu block group(s), last error %d",
bg_failed, bg_ret);

+ if (ret == -ERESTARTSYS || ret == -EINTR)
+ return ret;
+
mutex_lock(&fs_devices->device_list_mutex);
list_for_each_entry(device, &fs_devices->devices, dev_list) {
if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state))
@@ -6580,10 +6587,12 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
ret = btrfs_trim_free_extents(device, &group_trimmed);

trimmed += group_trimmed;
+ if (ret == -ERESTARTSYS || ret == -EINTR)
+ break;
if (ret) {
dev_failed++;
dev_ret = ret;
- break;
+ continue;
}
}
mutex_unlock(&fs_devices->device_list_mutex);
@@ -6593,6 +6602,8 @@ int btrfs_trim_fs(struct btrfs_fs_info *fs_info, struct fstrim_range *range)
"failed to trim %llu device(s), last error %d",
dev_failed, dev_ret);
range->len = trimmed;
+ if (ret == -ERESTARTSYS || ret == -EINTR)
+ return ret;
if (bg_ret)
return bg_ret;
return dev_ret;
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 029017afaf344..71ccba22752cb 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1199,11 +1199,14 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info,
}
if (ret > 0) {
/*
- * Shouldn't happen, but in case it does we
- * don't need to do the btrfs_next_item, just
- * continue.
+ * Shouldn't happen because the key should still
+ * be there (return 0), but in case it does it
+ * means we have reached the end of the tree -
+ * there are no more leaves with items that have
+ * a key greater than or equal to @found_key,
+ * so just stop the search loop.
*/
- continue;
+ break;
}
}
ret = btrfs_next_item(tree_root, path);
@@ -1673,8 +1676,10 @@ static int __del_qgroup_relation(struct btrfs_trans_handle *trans, u64 src,
if (ret < 0 && ret != -ENOENT)
goto out;
ret2 = del_qgroup_relation_item(trans, dst, src);
- if (ret2 < 0 && ret2 != -ENOENT)
+ if (ret2 < 0 && ret2 != -ENOENT) {
+ ret = ret2;
goto out;
+ }

/* At least one deletion succeeded, return 0 */
if (!ret || !ret2)
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 7371a3c0bdede..0ee6b40af65ee 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -726,7 +726,7 @@ start_transaction(struct btrfs_root *root, unsigned int num_items,

h->type = type;
INIT_LIST_HEAD(&h->new_bgs);
- btrfs_init_metadata_block_rsv(fs_info, &h->delayed_rsv, BTRFS_BLOCK_RSV_DELOPS);
+ btrfs_init_metadata_block_rsv(fs_info, &h->delayed_rsv, BTRFS_BLOCK_RSV_DELREFS);

smp_mb();
if (cur_trans->state >= TRANS_STATE_COMMIT_START &&
@@ -2487,13 +2487,6 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans)
list_add_tail(&fs_info->chunk_root->dirty_list,
&cur_trans->switch_commits);

- if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2)) {
- btrfs_set_root_node(&fs_info->block_group_root->root_item,
- fs_info->block_group_root->node);
- list_add_tail(&fs_info->block_group_root->dirty_list,
- &cur_trans->switch_commits);
- }
-
switch_commit_roots(trans);

ASSERT(list_empty(&cur_trans->dirty_bgs));
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 9c6e96f630132..24b6a7f83f935 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4117,8 +4117,14 @@ static int __btrfs_balance(struct btrfs_fs_info *fs_info)
* this shouldn't happen, it means the last relocate
* failed
*/
- if (ret == 0)
- BUG(); /* FIXME break ? */
+ if (unlikely(ret == 0)) {
+ btrfs_err(fs_info,
+ "unexpected exact match of CHUNK_ITEM in chunk tree, offset 0x%llx",
+ key.offset);
+ mutex_unlock(&fs_info->reclaim_bgs_lock);
+ ret = -EUCLEAN;
+ goto error;
+ }

ret = btrfs_previous_item(chunk_root, path, 0,
BTRFS_CHUNK_ITEM_KEY);
diff --git a/fs/buffer.c b/fs/buffer.c
index 79c19ffa44015..c5aaf1419d341 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -2968,6 +2968,10 @@ bool try_to_free_buffers(struct folio *folio)
if (folio_test_writeback(folio))
return false;

+ /* Misconfigured folio check */
+ if (WARN_ON_ONCE(!folio_buffers(folio)))
+ return true;
+
if (mapping == NULL) { /* can this still happen? */
ret = drop_buffers(folio, &buffers_to_free);
goto out;
diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
index 6828a2cff02c8..5addf09cb6e6e 100644
--- a/fs/ceph/addr.c
+++ b/fs/ceph/addr.c
@@ -1867,6 +1867,7 @@ int ceph_uninline_data(struct file *file)
struct ceph_osd_request *req = NULL;
struct ceph_cap_flush *prealloc_cf = NULL;
struct folio *folio = NULL;
+ struct ceph_snap_context *snapc = NULL;
u64 inline_version = CEPH_INLINE_NONE;
struct page *pages[1];
int err = 0;
@@ -1894,6 +1895,24 @@ int ceph_uninline_data(struct file *file)
if (inline_version == 1) /* initial version, no data */
goto out_uninline;

+ down_read(&fsc->mdsc->snap_rwsem);
+ spin_lock(&ci->i_ceph_lock);
+ if (__ceph_have_pending_cap_snap(ci)) {
+ struct ceph_cap_snap *capsnap =
+ list_last_entry(&ci->i_cap_snaps,
+ struct ceph_cap_snap,
+ ci_item);
+ snapc = ceph_get_snap_context(capsnap->context);
+ } else {
+ if (!ci->i_head_snapc) {
+ ci->i_head_snapc = ceph_get_snap_context(
+ ci->i_snap_realm->cached_context);
+ }
+ snapc = ceph_get_snap_context(ci->i_head_snapc);
+ }
+ spin_unlock(&ci->i_ceph_lock);
+ up_read(&fsc->mdsc->snap_rwsem);
+
folio = read_mapping_folio(inode->i_mapping, 0, file);
if (IS_ERR(folio)) {
err = PTR_ERR(folio);
@@ -1909,7 +1928,7 @@ int ceph_uninline_data(struct file *file)
req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,
ceph_vino(inode), 0, &len, 0, 1,
CEPH_OSD_OP_CREATE, CEPH_OSD_FLAG_WRITE,
- NULL, 0, 0, false);
+ snapc, 0, 0, false);
if (IS_ERR(req)) {
err = PTR_ERR(req);
goto out_unlock;
@@ -1925,7 +1944,7 @@ int ceph_uninline_data(struct file *file)
req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,
ceph_vino(inode), 0, &len, 1, 3,
CEPH_OSD_OP_WRITE, CEPH_OSD_FLAG_WRITE,
- NULL, ci->i_truncate_seq,
+ snapc, ci->i_truncate_seq,
ci->i_truncate_size, false);
if (IS_ERR(req)) {
err = PTR_ERR(req);
@@ -1988,6 +2007,7 @@ int ceph_uninline_data(struct file *file)
folio_put(folio);
}
out:
+ ceph_put_snap_context(snapc);
ceph_free_cap_flush(prealloc_cf);
doutc(cl, "%llx.%llx inline_version %llu = %d\n",
ceph_vinop(inode), inline_version, err);
diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 147126d2fa0c7..9e3122db3d780 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -2567,6 +2567,7 @@ static int ceph_zero_partial_object(struct inode *inode,
struct ceph_inode_info *ci = ceph_inode(inode);
struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
struct ceph_osd_request *req;
+ struct ceph_snap_context *snapc;
int ret = 0;
loff_t zero = 0;
int op;
@@ -2581,12 +2582,25 @@ static int ceph_zero_partial_object(struct inode *inode,
op = CEPH_OSD_OP_ZERO;
}

+ spin_lock(&ci->i_ceph_lock);
+ if (__ceph_have_pending_cap_snap(ci)) {
+ struct ceph_cap_snap *capsnap =
+ list_last_entry(&ci->i_cap_snaps,
+ struct ceph_cap_snap,
+ ci_item);
+ snapc = ceph_get_snap_context(capsnap->context);
+ } else {
+ BUG_ON(!ci->i_head_snapc);
+ snapc = ceph_get_snap_context(ci->i_head_snapc);
+ }
+ spin_unlock(&ci->i_ceph_lock);
+
req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout,
ceph_vino(inode),
offset, length,
0, 1, op,
CEPH_OSD_FLAG_WRITE,
- NULL, 0, 0, false);
+ snapc, 0, 0, false);
if (IS_ERR(req)) {
ret = PTR_ERR(req);
goto out;
@@ -2600,6 +2614,7 @@ static int ceph_zero_partial_object(struct inode *inode,
ceph_osdc_put_request(req);

out:
+ ceph_put_snap_context(snapc);
return ret;
}

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 0ad496ceb638d..195e4ce22a824 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -626,7 +626,8 @@ int dlm_search_rsb_tree(struct rhashtable *rhash, const void *name, int len,
struct dlm_rsb **r_ret)
{
char key[DLM_RESNAME_MAXLEN] = {};
-
+ if (len > DLM_RESNAME_MAXLEN)
+ return -EINVAL;
memcpy(key, name, len);
*r_ret = rhashtable_lookup_fast(rhash, &key, dlm_rhash_rsb_params);
if (*r_ret)
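The dlm change above guards a fixed-size, zero-initialized on-stack key against an oversized resource name before the memcpy() can overflow it. A minimal sketch of that bounds check (illustrative helper, not the dlm API):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define RESNAME_MAX 64	/* stand-in for DLM_RESNAME_MAXLEN */

/*
 * Reject a name that would not fit the fixed-size key buffer instead
 * of letting memcpy() write past it; the buffer is zero-padded so
 * keys always compare over the full length.
 */
static int build_key(char key[RESNAME_MAX], const void *name, size_t len)
{
	if (len > RESNAME_MAX)
		return -EINVAL;
	memset(key, 0, RESNAME_MAX);
	memcpy(key, name, len);
	return 0;
}
```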
diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
index 990defcf93043..2c7f066daacdd 100644
--- a/fs/erofs/fileio.c
+++ b/fs/erofs/fileio.c
@@ -25,23 +25,19 @@ static void erofs_fileio_ki_complete(struct kiocb *iocb, long ret)
container_of(iocb, struct erofs_fileio_rq, iocb);
struct folio_iter fi;

- if (ret > 0) {
- if (ret != rq->bio.bi_iter.bi_size) {
- bio_advance(&rq->bio, ret);
- zero_fill_bio(&rq->bio);
- }
- ret = 0;
+ if (ret >= 0 && ret != rq->bio.bi_iter.bi_size) {
+ bio_advance(&rq->bio, ret);
+ zero_fill_bio(&rq->bio);
}
- if (rq->bio.bi_end_io) {
- if (ret < 0 && !rq->bio.bi_status)
- rq->bio.bi_status = errno_to_blk_status(ret);
- rq->bio.bi_end_io(&rq->bio);
- } else {
+ if (!rq->bio.bi_end_io) {
bio_for_each_folio_all(fi, &rq->bio) {
DBG_BUGON(folio_test_uptodate(fi.folio));
- erofs_onlinefolio_end(fi.folio, ret, false);
+ erofs_onlinefolio_end(fi.folio, ret < 0, false);
}
+ } else if (ret < 0 && !rq->bio.bi_status) {
+ rq->bio.bi_status = errno_to_blk_status(ret);
}
+ bio_endio(&rq->bio);
bio_uninit(&rq->bio);
if (refcount_dec_and_test(&rq->ref))
kfree(rq);
@@ -50,7 +46,7 @@ static void erofs_fileio_ki_complete(struct kiocb *iocb, long ret)
static void erofs_fileio_rq_submit(struct erofs_fileio_rq *rq)
{
struct iov_iter iter;
- int ret;
+ ssize_t ret;

if (!rq)
return;
diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
index ce3d8737df85d..20e2cb18ed1d4 100644
--- a/fs/erofs/fscache.c
+++ b/fs/erofs/fscache.c
@@ -187,7 +187,7 @@ static void erofs_fscache_bio_endio(void *priv,

if (IS_ERR_VALUE(transferred_or_error))
io->bio.bi_status = errno_to_blk_status(transferred_or_error);
- io->bio.bi_end_io(&io->bio);
+ bio_endio(&io->bio);
BUILD_BUG_ON(offsetof(struct erofs_fscache_bio, io) != 0);
erofs_fscache_io_put(&io->io);
}
@@ -218,7 +218,7 @@ void erofs_fscache_submit_bio(struct bio *bio)
if (!ret)
return;
bio->bi_status = errno_to_blk_status(ret);
- bio->bi_end_io(bio);
+ bio_endio(bio);
}

static int erofs_fscache_meta_read_folio(struct file *data, struct folio *folio)
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index 2f9c3cd4f26cc..05d4a63300867 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -43,8 +43,13 @@
#define EXT4_EXT_MARK_UNWRIT1 0x2 /* mark first half unwritten */
#define EXT4_EXT_MARK_UNWRIT2 0x4 /* mark second half unwritten */

-#define EXT4_EXT_DATA_VALID1 0x8 /* first half contains valid data */
-#define EXT4_EXT_DATA_VALID2 0x10 /* second half contains valid data */
+/* first half contains valid data */
+#define EXT4_EXT_DATA_ENTIRE_VALID1 0x8 /* has entirely valid data */
+#define EXT4_EXT_DATA_PARTIAL_VALID1 0x10 /* has partially valid data */
+#define EXT4_EXT_DATA_VALID1 (EXT4_EXT_DATA_ENTIRE_VALID1 | \
+ EXT4_EXT_DATA_PARTIAL_VALID1)
+
+#define EXT4_EXT_DATA_VALID2 0x20 /* second half contains valid data */

static __le32 ext4_extent_block_csum(struct inode *inode,
struct ext4_extent_header *eh)
@@ -3188,8 +3193,12 @@ static struct ext4_ext_path *ext4_split_extent_at(handle_t *handle,
unsigned int ee_len, depth;
int err = 0;

- BUG_ON((split_flag & (EXT4_EXT_DATA_VALID1 | EXT4_EXT_DATA_VALID2)) ==
- (EXT4_EXT_DATA_VALID1 | EXT4_EXT_DATA_VALID2));
+ BUG_ON((split_flag & EXT4_EXT_DATA_VALID1) == EXT4_EXT_DATA_VALID1);
+ BUG_ON((split_flag & EXT4_EXT_DATA_VALID1) &&
+ (split_flag & EXT4_EXT_DATA_VALID2));
+
+ /* Do not cache extents that are in the process of being modified. */
+ flags |= EXT4_EX_NOCACHE;

ext_debug(inode, "logical block %llu\n", (unsigned long long)split);

@@ -3256,7 +3265,7 @@ static struct ext4_ext_path *ext4_split_extent_at(handle_t *handle,

err = PTR_ERR(path);
if (err != -ENOSPC && err != -EDQUOT && err != -ENOMEM)
- return path;
+ goto out_path;

/*
* Get a new path to try to zeroout or fix the extent length.
@@ -3270,7 +3279,7 @@ static struct ext4_ext_path *ext4_split_extent_at(handle_t *handle,
if (IS_ERR(path)) {
EXT4_ERROR_INODE(inode, "Failed split extent on %u, err %ld",
split, PTR_ERR(path));
- return path;
+ goto out_path;
}
depth = ext_depth(inode);
ex = path[depth].p_ext;
@@ -3302,6 +3311,23 @@ static struct ext4_ext_path *ext4_split_extent_at(handle_t *handle,
}

if (!err) {
+ /*
+ * The first half contains partially valid data, the
+ * splitting of this extent has not been completed, fix
+ * extent length and ext4_split_extent() will split the
+ * first half again.
+ */
+ if (split_flag & EXT4_EXT_DATA_PARTIAL_VALID1) {
+ /*
+ * Drop extent cache to prevent stale unwritten
+ * extents remaining after zeroing out.
+ */
+ ext4_es_remove_extent(inode,
+ le32_to_cpu(zero_ex.ee_block),
+ ext4_ext_get_actual_len(&zero_ex));
+ goto fix_extent_len;
+ }
+
/* update the extent length and mark as initialized */
ex->ee_len = cpu_to_le16(ee_len);
ext4_ext_try_to_merge(handle, inode, path, ex);
@@ -3330,6 +3356,10 @@ static struct ext4_ext_path *ext4_split_extent_at(handle_t *handle,
ext4_free_ext_path(path);
path = ERR_PTR(err);
}
+out_path:
+ if (IS_ERR(path))
+ /* Remove all remaining potentially stale extents. */
+ ext4_es_remove_extent(inode, ee_block, ee_len);
ext4_ext_show_leaf(inode, path);
return path;
}
@@ -3364,6 +3394,9 @@ static struct ext4_ext_path *ext4_split_extent(handle_t *handle,
ee_len = ext4_ext_get_actual_len(ex);
unwritten = ext4_ext_is_unwritten(ex);

+ /* Do not cache extents that are in the process of being modified. */
+ flags |= EXT4_EX_NOCACHE;
+
if (map->m_lblk + map->m_len < ee_block + ee_len) {
split_flag1 = split_flag & EXT4_EXT_MAY_ZEROOUT;
flags1 = flags | EXT4_GET_BLOCKS_PRE_IO;
@@ -3371,7 +3404,9 @@ static struct ext4_ext_path *ext4_split_extent(handle_t *handle,
split_flag1 |= EXT4_EXT_MARK_UNWRIT1 |
EXT4_EXT_MARK_UNWRIT2;
if (split_flag & EXT4_EXT_DATA_VALID2)
- split_flag1 |= EXT4_EXT_DATA_VALID1;
+ split_flag1 |= map->m_lblk > ee_block ?
+ EXT4_EXT_DATA_PARTIAL_VALID1 :
+ EXT4_EXT_DATA_ENTIRE_VALID1;
path = ext4_split_extent_at(handle, inode, path,
map->m_lblk + map->m_len, split_flag1, flags1);
if (IS_ERR(path))
@@ -3730,7 +3765,7 @@ static struct ext4_ext_path *ext4_split_convert_extents(handle_t *handle,

/* Convert to unwritten */
if (flags & EXT4_GET_BLOCKS_CONVERT_UNWRITTEN) {
- split_flag |= EXT4_EXT_DATA_VALID1;
+ split_flag |= EXT4_EXT_DATA_ENTIRE_VALID1;
/* Convert to initialized */
} else if (flags & EXT4_GET_BLOCKS_CONVERT) {
split_flag |= ee_block + ee_len <= eof_block ?
@@ -3768,6 +3803,8 @@ ext4_convert_unwritten_extents_endio(handle_t *handle, struct inode *inode,
* illegal.
*/
if (ee_block != map->m_lblk || ee_len > map->m_len) {
+ int flags = EXT4_GET_BLOCKS_CONVERT |
+ EXT4_GET_BLOCKS_METADATA_NOFAIL;
#ifdef CONFIG_EXT4_DEBUG
ext4_warning(inode->i_sb, "Inode (%ld) finished: extent logical block %llu,"
" len %u; IO logical block %llu, len %u",
@@ -3775,7 +3812,7 @@ ext4_convert_unwritten_extents_endio(handle_t *handle, struct inode *inode,
(unsigned long long)map->m_lblk, map->m_len);
#endif
path = ext4_split_convert_extents(handle, inode, map, path,
- EXT4_GET_BLOCKS_CONVERT, NULL);
+ flags, NULL);
if (IS_ERR(path))
return path;

@@ -3814,6 +3851,7 @@ static struct ext4_ext_path *
convert_initialized_extent(handle_t *handle, struct inode *inode,
struct ext4_map_blocks *map,
struct ext4_ext_path *path,
+ int flags,
unsigned int *allocated)
{
struct ext4_extent *ex;
@@ -3839,11 +3877,11 @@ convert_initialized_extent(handle_t *handle, struct inode *inode,

if (ee_block != map->m_lblk || ee_len > map->m_len) {
path = ext4_split_convert_extents(handle, inode, map, path,
- EXT4_GET_BLOCKS_CONVERT_UNWRITTEN, NULL);
+ flags, NULL);
if (IS_ERR(path))
return path;

- path = ext4_find_extent(inode, map->m_lblk, path, 0);
+ path = ext4_find_extent(inode, map->m_lblk, path, flags);
if (IS_ERR(path))
return path;
depth = ext_depth(inode);
@@ -4255,7 +4293,7 @@ int ext4_ext_map_blocks(handle_t *handle, struct inode *inode,
if ((!ext4_ext_is_unwritten(ex)) &&
(flags & EXT4_GET_BLOCKS_CONVERT_UNWRITTEN)) {
path = convert_initialized_extent(handle,
- inode, map, path, &allocated);
+ inode, map, path, flags, &allocated);
if (IS_ERR(path))
err = PTR_ERR(path);
goto out;
@@ -5214,7 +5252,8 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
if (!extent) {
EXT4_ERROR_INODE(inode, "unexpected hole at %lu",
(unsigned long) *iterator);
- return -EFSCORRUPTED;
+ ret = -EFSCORRUPTED;
+ goto out;
}
if (SHIFT == SHIFT_LEFT && *iterator >
le32_to_cpu(extent->ee_block)) {
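The ext4 extents changes above split the old single "first half valid" bit into two mutually exclusive bits, so a caller can distinguish a first half that is entirely valid from one that is only partially valid when the split point lands inside the mapped range. The flag relationships, as a standalone sketch (values illustrative; only the relationships mirror the patch):

```c
#include <assert.h>

#define DATA_ENTIRE_VALID1	0x08	/* first half entirely valid */
#define DATA_PARTIAL_VALID1	0x10	/* first half partially valid */
#define DATA_VALID1		(DATA_ENTIRE_VALID1 | DATA_PARTIAL_VALID1)
#define DATA_VALID2		0x20	/* second half valid */

/*
 * Mirrors the ext4_split_extent() choice: when the requested range
 * starts past the extent's first block, only part of the first half
 * is valid after the split.
 */
static int first_half_flag(unsigned int m_lblk, unsigned int ee_block)
{
	return m_lblk > ee_block ? DATA_PARTIAL_VALID1
				 : DATA_ENTIRE_VALID1;
}
```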
diff --git a/fs/ext4/ioctl.c b/fs/ext4/ioctl.c
index 1c77400bd88e1..99d185d00f377 100644
--- a/fs/ext4/ioctl.c
+++ b/fs/ext4/ioctl.c
@@ -963,6 +963,7 @@ static long ext4_ioctl_group_add(struct file *file,

err = ext4_group_add(sb, input);
if (EXT4_SB(sb)->s_journal) {
+ ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_RESIZE, NULL);
jbd2_journal_lock_updates(EXT4_SB(sb)->s_journal);
err2 = jbd2_journal_flush(EXT4_SB(sb)->s_journal, 0);
jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
@@ -1314,6 +1315,8 @@ static long __ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)

err = ext4_group_extend(sb, EXT4_SB(sb)->s_es, n_blocks_count);
if (EXT4_SB(sb)->s_journal) {
+ ext4_fc_mark_ineligible(sb, EXT4_FC_REASON_RESIZE,
+ NULL);
jbd2_journal_lock_updates(EXT4_SB(sb)->s_journal);
err2 = jbd2_journal_flush(EXT4_SB(sb)->s_journal, 0);
jbd2_journal_unlock_updates(EXT4_SB(sb)->s_journal);
diff --git a/fs/ext4/mballoc-test.c b/fs/ext4/mballoc-test.c
index f13db95284d9e..8eacba6e780ad 100644
--- a/fs/ext4/mballoc-test.c
+++ b/fs/ext4/mballoc-test.c
@@ -567,7 +567,7 @@ test_mark_diskspace_used_range(struct kunit *test,

bitmap = mbt_ctx_bitmap(sb, TEST_GOAL_GROUP);
memset(bitmap, 0, sb->s_blocksize);
- ret = ext4_mb_mark_diskspace_used(ac, NULL, 0);
+ ret = ext4_mb_mark_diskspace_used(ac, NULL);
KUNIT_ASSERT_EQ(test, ret, 0);

max = EXT4_CLUSTERS_PER_GROUP(sb);
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index ed2d63b2e91dc..edfffd15b2952 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1091,8 +1091,6 @@ static inline int should_optimize_scan(struct ext4_allocation_context *ac)
return 0;
if (ac->ac_criteria >= CR_GOAL_LEN_SLOW)
return 0;
- if (!ext4_test_inode_flag(ac->ac_inode, EXT4_INODE_EXTENTS))
- return 0;
return 1;
}

@@ -1638,16 +1636,17 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,

/* Avoid locking the folio in the fast path ... */
folio = __filemap_get_folio(inode->i_mapping, pnum, FGP_ACCESSED, 0);
- if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
+ if (IS_ERR(folio) || !folio_test_uptodate(folio) || folio_test_locked(folio)) {
+ /*
+ * folio_test_locked is employed to detect ongoing folio
+ * migrations, since concurrent migrations can lead to
+ * bitmap inconsistency. And if we are not uptodate that
+ * implies somebody just created the folio but is yet to
+ * initialize it. We can drop the folio reference and
+ * try to get the folio with lock in both cases to avoid
+ * concurrency.
+ */
if (!IS_ERR(folio))
- /*
- * drop the folio reference and try
- * to get the folio with lock. If we
- * are not uptodate that implies
- * somebody just created the folio but
- * is yet to initialize it. So
- * wait for it to initialize.
- */
folio_put(folio);
folio = __filemap_get_folio(inode->i_mapping, pnum,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT, gfp);
@@ -1689,7 +1688,7 @@ ext4_mb_load_buddy_gfp(struct super_block *sb, ext4_group_t group,
poff = block % blocks_per_page;

folio = __filemap_get_folio(inode->i_mapping, pnum, FGP_ACCESSED, 0);
- if (IS_ERR(folio) || !folio_test_uptodate(folio)) {
+ if (IS_ERR(folio) || !folio_test_uptodate(folio) || folio_test_locked(folio)) {
if (!IS_ERR(folio))
folio_put(folio);
folio = __filemap_get_folio(inode->i_mapping, pnum,
@@ -4097,8 +4096,7 @@ ext4_mb_mark_context(handle_t *handle, struct super_block *sb, bool state,
* Returns 0 if success or error code
*/
static noinline_for_stack int
-ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
- handle_t *handle, unsigned int reserv_clstrs)
+ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac, handle_t *handle)
{
struct ext4_group_desc *gdp;
struct ext4_sb_info *sbi;
@@ -4153,13 +4151,6 @@ ext4_mb_mark_diskspace_used(struct ext4_allocation_context *ac,
BUG_ON(changed != ac->ac_b_ex.fe_len);
#endif
percpu_counter_sub(&sbi->s_freeclusters_counter, ac->ac_b_ex.fe_len);
- /*
- * Now reduce the dirty block count also. Should not go negative
- */
- if (!(ac->ac_flags & EXT4_MB_DELALLOC_RESERVED))
- /* release all the reserved blocks if non delalloc */
- percpu_counter_sub(&sbi->s_dirtyclusters_counter,
- reserv_clstrs);

return err;
}
@@ -6244,7 +6235,7 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
ext4_mb_pa_put_free(ac);
}
if (likely(ac->ac_status == AC_STATUS_FOUND)) {
- *errp = ext4_mb_mark_diskspace_used(ac, handle, reserv_clstrs);
+ *errp = ext4_mb_mark_diskspace_used(ac, handle);
if (*errp) {
ext4_discard_allocated_blocks(ac);
goto errout;
@@ -6275,12 +6266,9 @@ ext4_fsblk_t ext4_mb_new_blocks(handle_t *handle,
out:
if (inquota && ar->len < inquota)
dquot_free_block(ar->inode, EXT4_C2B(sbi, inquota - ar->len));
- if (!ar->len) {
- if ((ar->flags & EXT4_MB_DELALLOC_RESERVED) == 0)
- /* release all the reserved blocks if non delalloc */
- percpu_counter_sub(&sbi->s_dirtyclusters_counter,
- reserv_clstrs);
- }
+ /* release any reserved blocks */
+ if (reserv_clstrs)
+ percpu_counter_sub(&sbi->s_dirtyclusters_counter, reserv_clstrs);

trace_ext4_allocate_blocks(ar, (unsigned long long)block);

diff --git a/fs/ext4/super.c b/fs/ext4/super.c
index d26a754bb9c83..9eec13f88d5a5 100644
--- a/fs/ext4/super.c
+++ b/fs/ext4/super.c
@@ -5534,6 +5534,10 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
clear_opt2(sb, MB_OPTIMIZE_SCAN);
}

+ err = ext4_percpu_param_init(sbi);
+ if (err)
+ goto failed_mount5;
+
err = ext4_mb_init(sb);
if (err) {
ext4_msg(sb, KERN_ERR, "failed to initialize mballoc (%d)",
@@ -5549,10 +5553,6 @@ static int __ext4_fill_super(struct fs_context *fc, struct super_block *sb)
sbi->s_journal->j_commit_callback =
ext4_journal_commit_callback;

- err = ext4_percpu_param_init(sbi);
- if (err)
- goto failed_mount6;
-
if (ext4_has_feature_flex_bg(sb))
if (!ext4_fill_flex_info(sb)) {
ext4_msg(sb, KERN_ERR,
@@ -5632,8 +5632,8 @@ failed_mount8: __maybe_unused
failed_mount6:
ext4_mb_release(sb);
ext4_flex_groups_free(sbi);
- ext4_percpu_param_destroy(sbi);
failed_mount5:
+ ext4_percpu_param_destroy(sbi);
ext4_ext_release(sb);
ext4_release_system_zone(sb);
failed_mount4a:
diff --git a/fs/fat/namei_msdos.c b/fs/fat/namei_msdos.c
index f06f6ba643cc8..c75e3791514a2 100644
--- a/fs/fat/namei_msdos.c
+++ b/fs/fat/namei_msdos.c
@@ -325,7 +325,12 @@ static int msdos_rmdir(struct inode *dir, struct dentry *dentry)
err = fat_remove_entries(dir, &sinfo); /* and releases bh */
if (err)
goto out;
- drop_nlink(dir);
+ if (dir->i_nlink >= 3)
+ drop_nlink(dir);
+ else {
+ fat_fs_error(sb, "parent dir link count too low (%u)",
+ dir->i_nlink);
+ }

clear_nlink(inode);
fat_truncate_time(inode, NULL, S_CTIME);
diff --git a/fs/fat/namei_vfat.c b/fs/fat/namei_vfat.c
index 15bf32c21ac0d..31d37e2b4f679 100644
--- a/fs/fat/namei_vfat.c
+++ b/fs/fat/namei_vfat.c
@@ -806,7 +806,12 @@ static int vfat_rmdir(struct inode *dir, struct dentry *dentry)
err = fat_remove_entries(dir, &sinfo); /* and releases bh */
if (err)
goto out;
- drop_nlink(dir);
+ if (dir->i_nlink >= 3)
+ drop_nlink(dir);
+ else {
+ fat_fs_error(sb, "parent dir link count too low (%u)",
+ dir->i_nlink);
+ }

clear_nlink(inode);
fat_truncate_time(inode, NULL, S_ATIME|S_MTIME);
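Both FAT rmdir paths above now refuse to decrement a parent link count that is already below the minimum: a directory that still contains a subdirectory must have nlink >= 3 (its own ".", its entry in the parent, and the child's ".."), so a smaller value indicates on-disk corruption and decrementing would underflow. A sketch of that guard (illustrative helper, not the VFS API):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Drop the parent's link count for a removed subdirectory only when
 * the count is plausible; otherwise leave it alone so the caller can
 * flag a filesystem error instead of underflowing nlink.
 */
static bool drop_parent_nlink(unsigned int *nlink)
{
	if (*nlink >= 3) {
		(*nlink)--;	/* the child's ".." went away */
		return true;
	}
	return false;		/* corrupt count: report, don't touch */
}
```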
diff --git a/fs/fs_struct.c b/fs/fs_struct.c
index 64c2d0814ed68..100bd3474476b 100644
--- a/fs/fs_struct.c
+++ b/fs/fs_struct.c
@@ -6,6 +6,7 @@
#include <linux/path.h>
#include <linux/slab.h>
#include <linux/fs_struct.h>
+#include <linux/init_task.h>
#include "internal.h"

/*
diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index 28ad07b003484..776090fbc9aa5 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -1124,10 +1124,18 @@ static int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
goto out_unlock;
break;
default:
- goto out_unlock;
+ goto out;
}

ret = gfs2_iomap_begin_write(inode, pos, length, flags, iomap, &mp);
+ if (ret)
+ goto out_unlock;
+
+out:
+ if (iomap->type == IOMAP_INLINE) {
+ iomap->private = metapath_dibh(&mp);
+ get_bh(iomap->private);
+ }

out_unlock:
release_metapath(&mp);
@@ -1141,6 +1149,9 @@ static int gfs2_iomap_end(struct inode *inode, loff_t pos, loff_t length,
struct gfs2_inode *ip = GFS2_I(inode);
struct gfs2_sbd *sdp = GFS2_SB(inode);

+ if (iomap->private)
+ brelse(iomap->private);
+
switch (flags & (IOMAP_WRITE | IOMAP_ZERO)) {
case IOMAP_WRITE:
if (flags & IOMAP_DIRECT)
diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
index 54d0eee24e10b..1a1cd631b5880 100644
--- a/fs/gfs2/glock.c
+++ b/fs/gfs2/glock.c
@@ -1393,31 +1393,45 @@ static int glocks_pending(unsigned int num_gh, struct gfs2_holder *ghs)
* gfs2_glock_async_wait - wait on multiple asynchronous glock acquisitions
* @num_gh: the number of holders in the array
* @ghs: the glock holder array
+ * @retries: number of retries attempted so far
*
* Returns: 0 on success, meaning all glocks have been granted and are held.
* -ESTALE if the request timed out, meaning all glocks were released,
* and the caller should retry the operation.
*/

-int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs)
+int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs,
+ unsigned int retries)
{
struct gfs2_sbd *sdp = ghs[0].gh_gl->gl_name.ln_sbd;
- int i, ret = 0, timeout = 0;
unsigned long start_time = jiffies;
+ int i, ret = 0;
+ long timeout;

might_sleep();
- /*
- * Total up the (minimum hold time * 2) of all glocks and use that to
- * determine the max amount of time we should wait.
- */
- for (i = 0; i < num_gh; i++)
- timeout += ghs[i].gh_gl->gl_hold_time << 1;

- if (!wait_event_timeout(sdp->sd_async_glock_wait,
+ timeout = GL_GLOCK_MIN_HOLD;
+ if (retries) {
+ unsigned int max_shift;
+ long incr;
+
+ /* Add a random delay and increase the timeout exponentially. */
+ max_shift = BITS_PER_LONG - 2 - __fls(GL_GLOCK_HOLD_INCR);
+ incr = min(GL_GLOCK_HOLD_INCR << min(retries - 1, max_shift),
+ 10 * HZ - GL_GLOCK_MIN_HOLD);
+ schedule_timeout_interruptible(get_random_long() % (incr / 3));
+ if (signal_pending(current))
+ goto interrupted;
+ timeout += (incr / 3) + get_random_long() % (incr / 3);
+ }
+
+ if (!wait_event_interruptible_timeout(sdp->sd_async_glock_wait,
!glocks_pending(num_gh, ghs), timeout)) {
ret = -ESTALE; /* request timed out. */
goto out;
}
+ if (signal_pending(current))
+ goto interrupted;

for (i = 0; i < num_gh; i++) {
struct gfs2_holder *gh = &ghs[i];
@@ -1441,6 +1455,10 @@ int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs)
}
}
return ret;
+
+interrupted:
+ ret = -EINTR;
+ goto out;
}

/**
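The reworked gfs2_glock_async_wait() above replaces the hold-time-derived timeout with a capped exponential backoff plus random jitter, so competing nodes stop retrying in lockstep after an -ESTALE timeout. The timeout computation, as a standalone sketch (constants and the clamp value are illustrative stand-ins for GL_GLOCK_MIN_HOLD, GL_GLOCK_HOLD_INCR and max_shift; jitter is passed in rather than drawn from get_random_long()):

```c
#include <assert.h>

#define HOLD_MIN	 50	/* stand-in for GL_GLOCK_MIN_HOLD */
#define HOLD_INCR	100	/* stand-in for GL_GLOCK_HOLD_INCR */
#define BACKOFF_CAP	(10 * 100 - HOLD_MIN)	/* like 10*HZ - min hold */

/*
 * The increment doubles per retry (shift clamped so it cannot
 * overflow), is capped, and a random third of it is added on top of
 * the base timeout.
 */
static long backoff_timeout(unsigned int retries, long jitter)
{
	long incr, timeout = HOLD_MIN;
	unsigned int shift;

	if (!retries)
		return timeout;
	shift = retries - 1;
	if (shift > 20)
		shift = 20;
	incr = (long)HOLD_INCR << shift;
	if (incr > BACKOFF_CAP)
		incr = BACKOFF_CAP;
	return timeout + incr / 3 + jitter % (incr / 3);
}
```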
diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
index 63e101d448e96..b54cc21cac7e6 100644
--- a/fs/gfs2/glock.h
+++ b/fs/gfs2/glock.h
@@ -190,7 +190,8 @@ int gfs2_glock_poll(struct gfs2_holder *gh);
int gfs2_instantiate(struct gfs2_holder *gh);
int gfs2_glock_holder_ready(struct gfs2_holder *gh);
int gfs2_glock_wait(struct gfs2_holder *gh);
-int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs);
+int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs,
+ unsigned int retries);
void gfs2_glock_dq(struct gfs2_holder *gh);
void gfs2_glock_dq_wait(struct gfs2_holder *gh);
void gfs2_glock_dq_uninit(struct gfs2_holder *gh);
diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c
index 90c7a795112d6..c37079718fdd5 100644
--- a/fs/gfs2/inode.c
+++ b/fs/gfs2/inode.c
@@ -1504,7 +1504,7 @@ static int gfs2_rename(struct inode *odir, struct dentry *odentry,
unsigned int num_gh;
int dir_rename = 0;
struct gfs2_diradd da = { .nr_blocks = 0, .save_loc = 0, };
- unsigned int x;
+ unsigned int retries = 0, x;
int error;

gfs2_holder_mark_uninitialized(&r_gh);
@@ -1554,12 +1554,17 @@ static int gfs2_rename(struct inode *odir, struct dentry *odentry,
num_gh++;
}

+again:
for (x = 0; x < num_gh; x++) {
error = gfs2_glock_nq(ghs + x);
if (error)
goto out_gunlock;
}
- error = gfs2_glock_async_wait(num_gh, ghs);
+ error = gfs2_glock_async_wait(num_gh, ghs, retries);
+ if (error == -ESTALE) {
+ retries++;
+ goto again;
+ }
if (error)
goto out_gunlock;

@@ -1748,7 +1753,7 @@ static int gfs2_exchange(struct inode *odir, struct dentry *odentry,
struct gfs2_sbd *sdp = GFS2_SB(odir);
struct gfs2_holder ghs[4], r_gh;
unsigned int num_gh;
- unsigned int x;
+ unsigned int retries = 0, x;
umode_t old_mode = oip->i_inode.i_mode;
umode_t new_mode = nip->i_inode.i_mode;
int error;
@@ -1792,13 +1797,18 @@ static int gfs2_exchange(struct inode *odir, struct dentry *odentry,
gfs2_holder_init(nip->i_gl, LM_ST_EXCLUSIVE, GL_ASYNC, ghs + num_gh);
num_gh++;

+again:
for (x = 0; x < num_gh; x++) {
error = gfs2_glock_nq(ghs + x);
if (error)
goto out_gunlock;
}

- error = gfs2_glock_async_wait(num_gh, ghs);
+ error = gfs2_glock_async_wait(num_gh, ghs, retries);
+ if (error == -ESTALE) {
+ retries++;
+ goto again;
+ }
if (error)
goto out_gunlock;

@@ -2191,6 +2201,14 @@ static int gfs2_getattr(struct mnt_idmap *idmap,
return 0;
}

+static bool fault_in_fiemap(struct fiemap_extent_info *fi)
+{
+ struct fiemap_extent __user *dest = fi->fi_extents_start;
+ size_t size = sizeof(*dest) * fi->fi_extents_max;
+
+ return fault_in_safe_writeable((char __user *)dest, size) == 0;
+}
+
static int gfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
u64 start, u64 len)
{
@@ -2200,14 +2218,22 @@ static int gfs2_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,

inode_lock_shared(inode);

+retry:
ret = gfs2_glock_nq_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
if (ret)
goto out;

+ pagefault_disable();
ret = iomap_fiemap(inode, fieinfo, start, len, &gfs2_iomap_ops);
+ pagefault_enable();

gfs2_glock_dq_uninit(&gh);

+ if (ret == -EFAULT && fault_in_fiemap(fieinfo)) {
+ fieinfo->fi_extents_mapped = 0;
+ goto retry;
+ }
+
out:
inode_unlock_shared(inode);
return ret;
diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
index 642584265a6f4..95b20e6a3fbc6 100644
--- a/fs/gfs2/quota.c
+++ b/fs/gfs2/quota.c
@@ -336,6 +336,7 @@ static void qd_put(struct gfs2_quota_data *qd)
lockref_mark_dead(&qd->qd_lockref);
spin_unlock(&qd->qd_lockref.lock);

+ list_lru_del_obj(&gfs2_qd_lru, &qd->qd_lru);
gfs2_qd_dispose(qd);
return;
}
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index c0089849be50e..fb437598e2625 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -629,7 +629,7 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
if (node) {
pr_crit("new node %u already hashed?\n", num);
WARN_ON(1);
- return node;
+ return ERR_PTR(-EEXIST);
}
node = __hfs_bnode_create(tree, num);
if (!node)
diff --git a/fs/hfsplus/inode.c b/fs/hfsplus/inode.c
index 2d68c52f894f9..3f64f68d625c1 100644
--- a/fs/hfsplus/inode.c
+++ b/fs/hfsplus/inode.c
@@ -601,6 +601,7 @@ int hfsplus_cat_read_inode(struct inode *inode, struct hfs_find_data *fd)
int hfsplus_cat_write_inode(struct inode *inode)
{
struct inode *main_inode = inode;
+ struct hfs_btree *tree = HFSPLUS_SB(inode->i_sb)->cat_tree;
struct hfs_find_data fd;
hfsplus_cat_entry entry;
int res = 0;
@@ -611,7 +612,7 @@ int hfsplus_cat_write_inode(struct inode *inode)
if (!main_inode->i_nlink)
return 0;

- if (hfs_find_init(HFSPLUS_SB(main_inode->i_sb)->cat_tree, &fd))
+ if (hfs_find_init(tree, &fd))
/* panic? */
return -EIO;

@@ -676,6 +677,15 @@ int hfsplus_cat_write_inode(struct inode *inode)
set_bit(HFSPLUS_I_CAT_DIRTY, &HFSPLUS_I(inode)->flags);
out:
hfs_find_exit(&fd);
+
+ if (!res) {
+ res = hfs_btree_write(tree);
+ if (res) {
+ pr_err("b-tree write err: %d, ino %lu\n",
+ res, inode->i_ino);
+ }
+ }
+
return res;
}

diff --git a/fs/hfsplus/super.c b/fs/hfsplus/super.c
index 0831cd7aa5deb..b4c7748c4cf98 100644
--- a/fs/hfsplus/super.c
+++ b/fs/hfsplus/super.c
@@ -52,6 +52,12 @@ static int hfsplus_system_read_inode(struct inode *inode)
return -EIO;
}

+ /*
+ * Assign a dummy file type, since may_open() requires that
+ * an inode has a valid file type.
+ */
+ inode->i_mode = S_IFREG;
+
return 0;
}

diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 52dd8c9c3f6f0..5ccf215f932db 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -398,9 +398,13 @@ static loff_t iomap_dio_bio_iter(const struct iomap_iter *iter,
nr_pages = bio_iov_vecs_to_alloc(dio->submit.iter, BIO_MAX_VECS);
do {
size_t n;
- if (dio->error) {
- iov_iter_revert(dio->submit.iter, copied);
- copied = ret = 0;
+
+ /*
+ * If completions already occurred and reported errors, give up now and
+ * don't bother submitting more bios.
+ */
+ if (unlikely(data_race(dio->error))) {
+ ret = 0;
goto out;
}

diff --git a/fs/jfs/jfs_logmgr.c b/fs/jfs/jfs_logmgr.c
index 270808b6219be..a9cfa6307252d 100644
--- a/fs/jfs/jfs_logmgr.c
+++ b/fs/jfs/jfs_logmgr.c
@@ -2312,6 +2312,7 @@ int jfsIOWait(void *arg)
{
struct lbuf *bp;

+ set_freezable();
do {
spin_lock_irq(&log_redrive_lock);
while ((bp = log_redrive_list)) {
diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c
index d68a4e6ac3454..e9bd7d3e61e9a 100644
--- a/fs/jfs/namei.c
+++ b/fs/jfs/namei.c
@@ -1228,7 +1228,7 @@ static int jfs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
jfs_err("jfs_rename: dtInsert returned -EIO");
goto out_tx;
}
- if (S_ISDIR(old_ip->i_mode))
+ if (S_ISDIR(old_ip->i_mode) && old_dir != new_dir)
inc_nlink(new_dir);
}
/*
@@ -1244,7 +1244,9 @@ static int jfs_rename(struct mnt_idmap *idmap, struct inode *old_dir,
goto out_tx;
}
if (S_ISDIR(old_ip->i_mode)) {
- drop_nlink(old_dir);
+ if (new_ip || old_dir != new_dir)
+ drop_nlink(old_dir);
+
if (old_dir != new_dir) {
/*
* Change inode number of parent for moved directory
diff --git a/fs/minix/inode.c b/fs/minix/inode.c
index fc01f9dc8c391..b38eec2760699 100644
--- a/fs/minix/inode.c
+++ b/fs/minix/inode.c
@@ -154,10 +154,38 @@ static int minix_reconfigure(struct fs_context *fc)
static bool minix_check_superblock(struct super_block *sb)
{
struct minix_sb_info *sbi = minix_sb(sb);
+ unsigned long block;

- if (sbi->s_imap_blocks == 0 || sbi->s_zmap_blocks == 0)
+ if (sbi->s_log_zone_size != 0) {
+ printk("MINIX-fs: zone size must equal block size; "
+ "s_log_zone_size > 0 is not supported.\n");
+ return false;
+ }
+
+ if (sbi->s_ninodes < 1 || sbi->s_firstdatazone <= 4 ||
+ sbi->s_firstdatazone >= sbi->s_nzones)
return false;

+ /* Apparently minix can create filesystems that allocate more blocks for
+ * the bitmaps than needed. We simply ignore that, but verify it didn't
+ * create one with not enough blocks and bail out if so.
+ */
+ block = minix_blocks_needed(sbi->s_ninodes, sb->s_blocksize);
+ if (sbi->s_imap_blocks < block) {
+ printk("MINIX-fs: file system does not have enough "
+ "imap blocks allocated. Refusing to mount.\n");
+ return false;
+ }
+
+ block = minix_blocks_needed(
+ (sbi->s_nzones - sbi->s_firstdatazone + 1),
+ sb->s_blocksize);
+ if (sbi->s_zmap_blocks < block) {
+ printk("MINIX-fs: file system does not have enough "
+ "zmap blocks allocated. Refusing to mount.\n");
+ return false;
+ }
+
/*
* s_max_size must not exceed the block mapping limitation. This check
* is only needed for V1 filesystems, since V2/V3 support an extra level
@@ -277,26 +305,6 @@ static int minix_fill_super(struct super_block *s, struct fs_context *fc)
minix_set_bit(0,sbi->s_imap[0]->b_data);
minix_set_bit(0,sbi->s_zmap[0]->b_data);

- /* Apparently minix can create filesystems that allocate more blocks for
- * the bitmaps than needed. We simply ignore that, but verify it didn't
- * create one with not enough blocks and bail out if so.
- */
- block = minix_blocks_needed(sbi->s_ninodes, s->s_blocksize);
- if (sbi->s_imap_blocks < block) {
- printk("MINIX-fs: file system does not have enough "
- "imap blocks allocated. Refusing to mount.\n");
- goto out_no_bitmap;
- }
-
- block = minix_blocks_needed(
- (sbi->s_nzones - sbi->s_firstdatazone + 1),
- s->s_blocksize);
- if (sbi->s_zmap_blocks < block) {
- printk("MINIX-fs: file system does not have enough "
- "zmap blocks allocated. Refusing to mount.\n");
- goto out_no_bitmap;
- }
-
/* set up enough so that it can read an inode */
s->s_op = &minix_sops;
s->s_time_min = 0;
diff --git a/fs/namespace.c b/fs/namespace.c
index c3702f3303a89..3869d2b32ac21 100644
--- a/fs/namespace.c
+++ b/fs/namespace.c
@@ -5395,7 +5395,7 @@ SYSCALL_DEFINE4(statmount, const struct mnt_id_req __user *, req,

if (kreq.mnt_ns_id && (ns != current->nsproxy->mnt_ns) &&
!ns_capable_noaudit(ns->user_ns, CAP_SYS_ADMIN))
- return -ENOENT;
+ return -EPERM;

ks = kmalloc(sizeof(*ks), GFP_KERNEL_ACCOUNT);
if (!ks)
diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 1cf1b2ddbf549..5b90f8727e683 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -72,7 +72,7 @@ const struct address_space_operations nfs_dir_aops = {
.free_folio = nfs_readdir_clear_array,
};

-#define NFS_INIT_DTSIZE PAGE_SIZE
+#define NFS_INIT_DTSIZE SZ_64K

static struct nfs_open_dir_context *
alloc_nfs_open_dir_context(struct inode *dir)
@@ -83,7 +83,7 @@ alloc_nfs_open_dir_context(struct inode *dir)
ctx = kzalloc(sizeof(*ctx), GFP_KERNEL_ACCOUNT);
if (ctx != NULL) {
ctx->attr_gencount = nfsi->attr_gencount;
- ctx->dtsize = NFS_INIT_DTSIZE;
+ ctx->dtsize = min(NFS_SERVER(dir)->dtsize, NFS_INIT_DTSIZE);
spin_lock(&dir->i_lock);
if (list_empty(&nfsi->open_files) &&
(nfsi->cache_validity & NFS_INO_DATA_INVAL_DEFER))
diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
index 82a053304ad59..84fe3ef21b6da 100644
--- a/fs/nfs/localio.c
+++ b/fs/nfs/localio.c
@@ -43,7 +43,6 @@ struct nfs_local_fsync_ctx {
struct nfsd_file *localio;
struct nfs_commit_data *data;
struct work_struct work;
- struct kref kref;
struct completion *done;
};
static void nfs_local_fsync_work(struct work_struct *work);
@@ -775,46 +774,38 @@ nfs_local_fsync_ctx_alloc(struct nfs_commit_data *data,
ctx->localio = localio;
ctx->data = data;
INIT_WORK(&ctx->work, nfs_local_fsync_work);
- kref_init(&ctx->kref);
ctx->done = NULL;
}
return ctx;
}

-static void
-nfs_local_fsync_ctx_kref_free(struct kref *kref)
-{
- kfree(container_of(kref, struct nfs_local_fsync_ctx, kref));
-}
-
-static void
-nfs_local_fsync_ctx_put(struct nfs_local_fsync_ctx *ctx)
-{
- kref_put(&ctx->kref, nfs_local_fsync_ctx_kref_free);
-}
-
static void
nfs_local_fsync_ctx_free(struct nfs_local_fsync_ctx *ctx)
{
nfs_local_release_commit_data(ctx->localio, ctx->data,
ctx->data->task.tk_ops);
- nfs_local_fsync_ctx_put(ctx);
+ kfree(ctx);
}

static void
nfs_local_fsync_work(struct work_struct *work)
{
+ unsigned long old_flags = current->flags;
struct nfs_local_fsync_ctx *ctx;
int status;

ctx = container_of(work, struct nfs_local_fsync_ctx, work);

+ current->flags |= PF_LOCAL_THROTTLE | PF_MEMALLOC_NOIO;
+
status = nfs_local_run_commit(nfs_to->nfsd_file_file(ctx->localio),
ctx->data);
nfs_local_commit_done(ctx->data, status);
if (ctx->done != NULL)
complete(ctx->done);
nfs_local_fsync_ctx_free(ctx);
+
+ current->flags = old_flags;
}

int nfs_local_commit(struct nfsd_file *localio,
@@ -823,7 +814,7 @@ int nfs_local_commit(struct nfsd_file *localio,
{
struct nfs_local_fsync_ctx *ctx;

- ctx = nfs_local_fsync_ctx_alloc(data, localio, GFP_KERNEL);
+ ctx = nfs_local_fsync_ctx_alloc(data, localio, GFP_NOIO);
if (!ctx) {
nfs_local_commit_done(data, -ENOMEM);
nfs_local_release_commit_data(localio, data, call_ops);
@@ -831,14 +822,14 @@ int nfs_local_commit(struct nfsd_file *localio,
}

nfs_local_init_commit(data, call_ops);
- kref_get(&ctx->kref);
+
if (how & FLUSH_SYNC) {
DECLARE_COMPLETION_ONSTACK(done);
ctx->done = &done;
- queue_work(nfsiod_workqueue, &ctx->work);
+ queue_work(nfslocaliod_workqueue, &ctx->work);
wait_for_completion(&done);
} else
- queue_work(nfsiod_workqueue, &ctx->work);
- nfs_local_fsync_ctx_put(ctx);
+ queue_work(nfslocaliod_workqueue, &ctx->work);
+
return 0;
}
diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
index 16981d0389c4c..116499e0f5cee 100644
--- a/fs/nfs/pnfs.c
+++ b/fs/nfs/pnfs.c
@@ -465,7 +465,8 @@ pnfs_mark_layout_stateid_invalid(struct pnfs_layout_hdr *lo,
};
struct pnfs_layout_segment *lseg, *next;

- set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags);
+ if (test_and_set_bit(NFS_LAYOUT_INVALID_STID, &lo->plh_flags))
+ return !list_empty(&lo->plh_segs);
clear_bit(NFS_INO_LAYOUTCOMMIT, &NFS_I(lo->plh_inode)->flags);
list_for_each_entry_safe(lseg, next, &lo->plh_segs, pls_list)
pnfs_clear_lseg_state(lseg, lseg_list);
diff --git a/fs/nfsd/nfs2acl.c b/fs/nfsd/nfs2acl.c
index 5fb202acb0fd0..0ac538c761800 100644
--- a/fs/nfsd/nfs2acl.c
+++ b/fs/nfsd/nfs2acl.c
@@ -45,7 +45,7 @@ static __be32 nfsacld_proc_getacl(struct svc_rqst *rqstp)
inode = d_inode(fh->fh_dentry);

if (argp->mask & ~NFS_ACL_MASK) {
- resp->status = nfserr_inval;
+ resp->status = nfserr_io;
goto out;
}
resp->mask = argp->mask;
diff --git a/fs/nfsd/nfs4idmap.c b/fs/nfsd/nfs4idmap.c
index 8cca1329f3485..c319c31b0f647 100644
--- a/fs/nfsd/nfs4idmap.c
+++ b/fs/nfsd/nfs4idmap.c
@@ -643,34 +643,74 @@ static __be32 encode_name_from_id(struct xdr_stream *xdr,
return idmap_id_to_name(xdr, rqstp, type, id);
}

-__be32
-nfsd_map_name_to_uid(struct svc_rqst *rqstp, const char *name, size_t namelen,
- kuid_t *uid)
+/**
+ * nfsd_map_name_to_uid - Map user@domain to local UID
+ * @rqstp: RPC execution context
+ * @name: user@domain name to be mapped
+ * @namelen: length of name, in bytes
+ * @uid: OUT: mapped local UID value
+ *
+ * Returns nfs_ok on success or an NFSv4 status code on failure.
+ */
+__be32 nfsd_map_name_to_uid(struct svc_rqst *rqstp, const char *name,
+ size_t namelen, kuid_t *uid)
{
__be32 status;
u32 id = -1;

+ /*
+ * The idmap lookup below triggers an upcall that invokes
+ * cache_check(). RQ_USEDEFERRAL must be clear to prevent
+ * cache_check() from setting RQ_DROPME via svc_defer().
+ * NFSv4 servers are not permitted to drop requests. Also
+ * RQ_DROPME will force NFSv4.1 session slot processing to
+ * be skipped.
+ */
+ WARN_ON_ONCE(test_bit(RQ_USEDEFERRAL, &rqstp->rq_flags));
+
if (name == NULL || namelen == 0)
return nfserr_inval;

status = do_name_to_id(rqstp, IDMAP_TYPE_USER, name, namelen, &id);
+ if (status)
+ return status;
*uid = make_kuid(nfsd_user_namespace(rqstp), id);
if (!uid_valid(*uid))
status = nfserr_badowner;
return status;
}

-__be32
-nfsd_map_name_to_gid(struct svc_rqst *rqstp, const char *name, size_t namelen,
- kgid_t *gid)
+/**
+ * nfsd_map_name_to_gid - Map user@domain to local GID
+ * @rqstp: RPC execution context
+ * @name: user@domain name to be mapped
+ * @namelen: length of name, in bytes
+ * @gid: OUT: mapped local GID value
+ *
+ * Returns nfs_ok on success or an NFSv4 status code on failure.
+ */
+__be32 nfsd_map_name_to_gid(struct svc_rqst *rqstp, const char *name,
+ size_t namelen, kgid_t *gid)
{
__be32 status;
u32 id = -1;

+ /*
+ * The idmap lookup below triggers an upcall that invokes
+ * cache_check(). RQ_USEDEFERRAL must be clear to prevent
+ * cache_check() from setting RQ_DROPME via svc_defer().
+ * NFSv4 servers are not permitted to drop requests. Also
+ * RQ_DROPME will force NFSv4.1 session slot processing to
+ * be skipped.
+ */
+ WARN_ON_ONCE(test_bit(RQ_USEDEFERRAL, &rqstp->rq_flags));
+
if (name == NULL || namelen == 0)
return nfserr_inval;

status = do_name_to_id(rqstp, IDMAP_TYPE_GROUP, name, namelen, &id);
+ if (status)
+ return status;
*gid = make_kgid(nfsd_user_namespace(rqstp), id);
if (!gid_valid(*gid))
status = nfserr_badowner;
diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
index 05efa10ed84b7..2c7a8943cad9c 100644
--- a/fs/nfsd/nfs4proc.c
+++ b/fs/nfsd/nfs4proc.c
@@ -2818,8 +2818,6 @@ nfsd4_proc_compound(struct svc_rqst *rqstp)
BUG_ON(cstate->replay_owner);
out:
cstate->status = status;
- /* Reset deferral mechanism for RPC deferrals */
- set_bit(RQ_USEDEFERRAL, &rqstp->rq_flags);
return rpc_success;
}

diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
index fd81db17691a1..b7bdb9b44440b 100644
--- a/fs/nfsd/nfs4xdr.c
+++ b/fs/nfsd/nfs4xdr.c
@@ -5876,6 +5876,22 @@ nfs4svc_decode_compoundargs(struct svc_rqst *rqstp, struct xdr_stream *xdr)
args->ops = args->iops;
args->rqstp = rqstp;

+ /*
+ * NFSv4 operation decoders can invoke svc cache lookups
+ * that trigger svc_defer() when RQ_USEDEFERRAL is set,
+ * setting RQ_DROPME. This creates two problems:
+ *
+ * 1. Non-idempotency: Compounds make it too hard to avoid
+ * problems if a request is deferred and replayed.
+ *
+ * 2. Session slot leakage (NFSv4.1+): If RQ_DROPME is set
+ * during decode but SEQUENCE executes successfully, the
+ * session slot will be marked INUSE. The request is then
+ * dropped before encoding, so the slot is never released,
+ * rendering it permanently unusable by the client.
+ */
+ clear_bit(RQ_USEDEFERRAL, &rqstp->rq_flags);
+
return nfsd4_decode_compound(args);
}

diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
index 6dda081eb24c0..664171bdefb65 100644
--- a/fs/nfsd/nfsproc.c
+++ b/fs/nfsd/nfsproc.c
@@ -32,7 +32,7 @@ static __be32 nfsd_map_status(__be32 status)
break;
case nfserr_symlink:
case nfserr_wrong_type:
- status = nfserr_inval;
+ status = nfserr_io;
break;
}
return status;
diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
index dd459316529e8..4390c0e4bfb78 100644
--- a/fs/ntfs3/attrib.c
+++ b/fs/ntfs3/attrib.c
@@ -448,8 +448,10 @@ int attr_set_size(struct ntfs_inode *ni, enum ATTR_TYPE type,

is_ext = is_attr_ext(attr_b);
align = sbi->cluster_size;
- if (is_ext)
+ if (is_ext) {
align <<= attr_b->nres.c_unit;
+ keep_prealloc = false;
+ }

old_valid = le64_to_cpu(attr_b->nres.valid_size);
old_size = le64_to_cpu(attr_b->nres.data_size);
@@ -1353,19 +1355,28 @@ int attr_load_runs_range(struct ntfs_inode *ni, enum ATTR_TYPE type,
CLST vcn;
CLST vcn_last = (to - 1) >> cluster_bits;
CLST lcn, clen;
- int err;
+ int err = 0;
+ int retry = 0;

for (vcn = from >> cluster_bits; vcn <= vcn_last; vcn += clen) {
if (!run_lookup_entry(run, vcn, &lcn, &clen, NULL)) {
+ if (retry != 0) { /* Next run_lookup_entry(vcn) also failed. */
+ err = -EINVAL;
+ break;
+ }
err = attr_load_runs_vcn(ni, type, name, name_len, run,
vcn);
if (err)
- return err;
+ break;
+
clen = 0; /* Next run_lookup_entry(vcn) must be success. */
+ retry++;
}
+ else
+ retry = 0;
}

- return 0;
+ return err;
}

#ifdef CONFIG_NTFS3_LZX_XPRESS
diff --git a/fs/ntfs3/attrlist.c b/fs/ntfs3/attrlist.c
index a4d74bed74fab..098bd7e8c3d64 100644
--- a/fs/ntfs3/attrlist.c
+++ b/fs/ntfs3/attrlist.c
@@ -52,6 +52,11 @@ int ntfs_load_attr_list(struct ntfs_inode *ni, struct ATTRIB *attr)

if (!attr->non_res) {
lsize = le32_to_cpu(attr->res.data_size);
+ if (!lsize) {
+ err = -EINVAL;
+ goto out;
+ }
+
/* attr is resident: lsize < record_size (1K or 4K) */
le = kvmalloc(al_aligned(lsize), GFP_KERNEL);
if (!le) {
@@ -66,6 +71,10 @@ int ntfs_load_attr_list(struct ntfs_inode *ni, struct ATTRIB *attr)
u16 run_off = le16_to_cpu(attr->nres.run_off);

lsize = le64_to_cpu(attr->nres.data_size);
+ if (!lsize) {
+ err = -EINVAL;
+ goto out;
+ }

run_init(&ni->attr_list.run);

diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
index f1122ac5be622..3f144a049d710 100644
--- a/fs/ntfs3/file.c
+++ b/fs/ntfs3/file.c
@@ -964,7 +964,7 @@ static int ntfs_get_frame_pages(struct address_space *mapping, pgoff_t index,

folio = __filemap_get_folio(mapping, index,
FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
- gfp_mask);
+ gfp_mask | __GFP_ZERO);
if (IS_ERR(folio)) {
while (npages--) {
folio = page_folio(pages[npages]);
@@ -1045,8 +1045,12 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
goto out;

if (lcn == SPARSE_LCN) {
- ni->i_valid = valid =
- frame_vbo + ((u64)clen << sbi->cluster_bits);
+ valid = frame_vbo + ((u64)clen << sbi->cluster_bits);
+ if (ni->i_valid == valid) {
+ err = -EINVAL;
+ goto out;
+ }
+ ni->i_valid = valid;
continue;
}

diff --git a/fs/ntfs3/fslog.c b/fs/ntfs3/fslog.c
index d0d530f4e2b95..5afe00972924c 100644
--- a/fs/ntfs3/fslog.c
+++ b/fs/ntfs3/fslog.c
@@ -3431,6 +3431,9 @@ static int do_action(struct ntfs_log *log, struct OPEN_ATTR_ENRTY *oe,

e1 = Add2Ptr(attr, le16_to_cpu(lrh->attr_off));
esize = le16_to_cpu(e1->size);
+ if (PtrOffset(e1, Add2Ptr(hdr, used)) < esize)
+ goto dirty_vol;
+
e2 = Add2Ptr(e1, esize);

memmove(e1, e2, PtrOffset(e2, Add2Ptr(hdr, used)));
diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
index 57933576212bb..37c5d9a1f77b7 100644
--- a/fs/ntfs3/fsntfs.c
+++ b/fs/ntfs3/fsntfs.c
@@ -1276,6 +1276,12 @@ int ntfs_read_run_nb(struct ntfs_sb_info *sbi, const struct runs_tree *run,

} while (len32);

+ if (!run) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ /* Get next fragment to read. */
vcn_next = vcn + clen;
if (!run_get_entry(run, ++idx, &vcn, &lcn, &clen) ||
vcn != vcn_next) {
diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
index 6d1bf890929d9..050b3709e0204 100644
--- a/fs/ntfs3/index.c
+++ b/fs/ntfs3/index.c
@@ -1190,7 +1190,12 @@ int indx_find(struct ntfs_index *indx, struct ntfs_inode *ni,
return -EINVAL;
}

- fnd_push(fnd, node, e);
+ err = fnd_push(fnd, node, e);
+
+ if (err) {
+ put_indx_node(node);
+ return err;
+ }
}

*entry = e;
diff --git a/fs/ocfs2/xattr.c b/fs/ocfs2/xattr.c
index 73a6f6fd8a8ef..b1a18a944c850 100644
--- a/fs/ocfs2/xattr.c
+++ b/fs/ocfs2/xattr.c
@@ -6364,6 +6364,10 @@ static int ocfs2_reflink_xattr_header(handle_t *handle,
(void *)last - (void *)xe);
memset(last, 0,
sizeof(struct ocfs2_xattr_entry));
+ last = &new_xh->xh_entries[le16_to_cpu(new_xh->xh_count)] - 1;
+ } else {
+ memset(xe, 0, sizeof(struct ocfs2_xattr_entry));
+ last = NULL;
}

/*
diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
index 0ca8af060b0c1..e185f4f668b54 100644
--- a/fs/overlayfs/readdir.c
+++ b/fs/overlayfs/readdir.c
@@ -673,7 +673,7 @@ static bool ovl_fill_real(struct dir_context *ctx, const char *name,
container_of(ctx, struct ovl_readdir_translate, ctx);
struct dir_context *orig_ctx = rdt->orig_ctx;

- if (rdt->parent_ino && strcmp(name, "..") == 0) {
+ if (rdt->parent_ino && namelen == 2 && !strncmp(name, "..", 2)) {
ino = rdt->parent_ino;
} else if (rdt->cache) {
struct ovl_cache_entry *p;
diff --git a/fs/proc/array.c b/fs/proc/array.c
index 5e4f7b411fbdb..363d9331216b9 100644
--- a/fs/proc/array.c
+++ b/fs/proc/array.c
@@ -531,7 +531,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
}

sid = task_session_nr_ns(task, ns);
- ppid = task_tgid_nr_ns(task->real_parent, ns);
+ ppid = task_ppid_nr_ns(task, ns);
pgid = task_pgrp_nr_ns(task, ns);

unlock_task_sighand(task, &flags);
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index c5c1ad791c13e..fb01a071b83d7 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -581,7 +581,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
} else {
if (karg.build_id_size < build_id_sz) {
err = -ENAMETOOLONG;
- goto out;
+ goto out_file;
}
karg.build_id_size = build_id_sz;
}
@@ -609,6 +609,7 @@ static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
out:
query_vma_teardown(mm, vma);
mmput(mm);
+out_file:
if (vm_file)
fput(vm_file);
kfree(name_buf);
diff --git a/fs/pstore/ram_core.c b/fs/pstore/ram_core.c
index f1848cdd6d348..7b6d6378a3b87 100644
--- a/fs/pstore/ram_core.c
+++ b/fs/pstore/ram_core.c
@@ -298,6 +298,17 @@ void persistent_ram_save_old(struct persistent_ram_zone *prz)
if (!size)
return;

+ /*
+ * If the existing buffer is differently sized, free it so a new
+ * one is allocated. This can happen when persistent_ram_save_old()
+ * is called early in boot and later for a timer-triggered
+ * survivable crash when the crash dumps don't match in size
+ * (which would be extremely unlikely given kmsg buffers usually
+ * exceed prz buffer sizes).
+ */
+ if (prz->old_log && prz->old_log_size != size)
+ persistent_ram_free_old(prz);
+
if (!prz->old_log) {
persistent_ram_ecc_old(prz);
prz->old_log = kvzalloc(size, GFP_KERNEL);
@@ -446,6 +457,13 @@ static void *persistent_ram_vmap(phys_addr_t start, size_t size,
vaddr = vmap(pages, page_count, VM_MAP | VM_IOREMAP, prot);
kfree(pages);

+ /*
+ * vmap() may fail and return NULL. Do not add the offset in this
+ * case, otherwise a NULL mapping would appear successful.
+ */
+ if (!vaddr)
+ return NULL;
+
/*
* Since vmap() uses page granularity, we must add the offset
* into the page here, to get the byte granularity address
diff --git a/fs/quota/quota.c b/fs/quota/quota.c
index 290157bc7bec2..04c6712d4031c 100644
--- a/fs/quota/quota.c
+++ b/fs/quota/quota.c
@@ -899,6 +899,7 @@ static struct super_block *quotactl_block(const char __user *special, int cmd)
sb_start_write(sb);
sb_end_write(sb);
put_super(sb);
+ cond_resched();
goto retry;
}
return sb;
diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
index e92a61e934e44..d83161285a175 100644
--- a/fs/smb/client/cached_dir.c
+++ b/fs/smb/client/cached_dir.c
@@ -769,11 +769,11 @@ static void cfids_laundromat_worker(struct work_struct *work)

dput(dentry);
if (cfid->is_open) {
- spin_lock(&cifs_tcp_ses_lock);
+ spin_lock(&cfid->tcon->tc_lock);
++cfid->tcon->tc_count;
trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count,
netfs_trace_tcon_ref_get_cached_laundromat);
- spin_unlock(&cifs_tcp_ses_lock);
+ spin_unlock(&cfid->tcon->tc_lock);
queue_work(serverclose_wq, &cfid->close_work);
} else
/*
diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
index 3b0f63e0a2534..3c83bda8298ee 100644
--- a/fs/smb/client/connect.c
+++ b/fs/smb/client/connect.c
@@ -4068,7 +4068,9 @@ cifs_setup_session(const unsigned int xid, struct cifs_ses *ses,
ses->ses_status = SES_IN_SETUP;

/* force iface_list refresh */
+ spin_lock(&ses->iface_lock);
ses->iface_last_update = 0;
+ spin_unlock(&ses->iface_lock);
}
spin_unlock(&ses->ses_lock);

diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
index 414242a33d61a..b7ab18d4bedca 100644
--- a/fs/smb/client/smb2file.c
+++ b/fs/smb/client/smb2file.c
@@ -123,6 +123,8 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
&err_buftype);
if (rc == -EACCES && retry_without_read_attributes) {
free_rsp_buf(err_buftype, err_iov.iov_base);
+ memset(&err_iov, 0, sizeof(err_iov));
+ err_buftype = CIFS_NO_BUFFER;
oparms->desired_access &= ~FILE_READ_ATTRIBUTES;
rc = SMB2_open(xid, oparms, smb2_path, &smb2_oplock, smb2_data, NULL, &err_iov,
&err_buftype);
diff --git a/fs/smb/client/smb2misc.c b/fs/smb/client/smb2misc.c
index cddf273c14aed..5a820176b6f04 100644
--- a/fs/smb/client/smb2misc.c
+++ b/fs/smb/client/smb2misc.c
@@ -807,14 +807,14 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
int rc;

cifs_dbg(FYI, "%s: tc_count=%d\n", __func__, tcon->tc_count);
- spin_lock(&cifs_tcp_ses_lock);
+ spin_lock(&tcon->tc_lock);
if (tcon->tc_count <= 0) {
struct TCP_Server_Info *server = NULL;

trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
netfs_trace_tcon_ref_see_cancelled_close);
WARN_ONCE(tcon->tc_count < 0, "tcon refcount is negative");
- spin_unlock(&cifs_tcp_ses_lock);
+ spin_unlock(&tcon->tc_lock);

if (tcon->ses) {
server = tcon->ses->server;
@@ -828,7 +828,7 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
tcon->tc_count++;
trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
netfs_trace_tcon_ref_get_cancelled_close);
- spin_unlock(&cifs_tcp_ses_lock);
+ spin_unlock(&tcon->tc_lock);

rc = __smb2_handle_cancelled_cmd(tcon, SMB2_CLOSE_HE, 0,
persistent_fid, volatile_fid);
diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
index 4e7e6ad79966e..ab7e4c9f556f6 100644
--- a/fs/smb/client/smb2ops.c
+++ b/fs/smb/client/smb2ops.c
@@ -637,13 +637,6 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
p = buf;

spin_lock(&ses->iface_lock);
- /* do not query too frequently, this time with lock held */
- if (ses->iface_last_update &&
- time_before(jiffies, ses->iface_last_update +
- (SMB_INTERFACE_POLL_INTERVAL * HZ))) {
- spin_unlock(&ses->iface_lock);
- return 0;
- }

/*
* Go through iface_list and mark them as inactive
@@ -666,7 +659,6 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
"Empty network interface list returned by server %s\n",
ses->server->hostname);
rc = -EOPNOTSUPP;
- ses->iface_last_update = jiffies;
goto out;
}

@@ -795,8 +787,6 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+ sizeof(p->Next) && p->Next))
cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);

- ses->iface_last_update = jiffies;
-
out:
/*
* Go through the list again and put the inactive entries
@@ -825,10 +815,17 @@ SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon, bool in_
struct TCP_Server_Info *pserver;

/* do not query too frequently */
+ spin_lock(&ses->iface_lock);
if (ses->iface_last_update &&
time_before(jiffies, ses->iface_last_update +
- (SMB_INTERFACE_POLL_INTERVAL * HZ)))
+ (SMB_INTERFACE_POLL_INTERVAL * HZ))) {
+ spin_unlock(&ses->iface_lock);
return 0;
+ }
+
+ ses->iface_last_update = jiffies;
+
+ spin_unlock(&ses->iface_lock);

rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,
FSCTL_QUERY_NETWORK_INTERFACE_INFO,
@@ -1190,6 +1187,7 @@ smb2_set_ea(const unsigned int xid, struct cifs_tcon *tcon,

replay_again:
/* reinitialize for possible replay */
+ used_len = 0;
flags = CIFS_CP_CREATE_CLOSE_OP;
oplock = SMB2_OPLOCK_LEVEL_NONE;
server = cifs_pick_channel(ses);
@@ -1588,6 +1586,7 @@ smb2_ioctl_query_info(const unsigned int xid,

replay_again:
/* reinitialize for possible replay */
+ buffer = NULL;
flags = CIFS_CP_CREATE_CLOSE_OP;
oplock = SMB2_OPLOCK_LEVEL_NONE;
server = cifs_pick_channel(ses);
@@ -3003,7 +3002,9 @@ smb2_get_dfs_refer(const unsigned int xid, struct cifs_ses *ses,
struct cifs_tcon,
tcon_list);
if (tcon) {
+ spin_lock(&tcon->tc_lock);
tcon->tc_count++;
+ spin_unlock(&tcon->tc_lock);
trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
netfs_trace_tcon_ref_get_dfs_refer);
}
@@ -3068,13 +3069,9 @@ smb2_get_dfs_refer(const unsigned int xid, struct cifs_ses *ses,
out:
if (tcon && !tcon->ipc) {
/* ipc tcons are not refcounted */
- spin_lock(&cifs_tcp_ses_lock);
- tcon->tc_count--;
+ cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_dfs_refer);
trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
netfs_trace_tcon_ref_dec_dfs_refer);
- /* tc_count can never go negative */
- WARN_ON(tcon->tc_count < 0);
- spin_unlock(&cifs_tcp_ses_lock);
}
kfree(utf16_path);
kfree(dfs_req);
diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
index b0ff9f7e8cea8..23e61ecf1b51c 100644
--- a/fs/smb/client/smb2pdu.c
+++ b/fs/smb/client/smb2pdu.c
@@ -2856,6 +2856,7 @@ int smb311_posix_mkdir(const unsigned int xid, struct inode *inode,

replay_again:
/* reinitialize for possible replay */
+ pc_buf = NULL;
flags = 0;
n_iov = 2;
server = cifs_pick_channel(ses);
@@ -4180,7 +4181,9 @@ void smb2_reconnect_server(struct work_struct *work)

list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
if (tcon->need_reconnect || tcon->need_reopen_files) {
+ spin_lock(&tcon->tc_lock);
tcon->tc_count++;
+ spin_unlock(&tcon->tc_lock);
trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
netfs_trace_tcon_ref_get_reconnect_server);
list_add_tail(&tcon->rlist, &tmp_list);
diff --git a/fs/smb/client/smbdirect.c b/fs/smb/client/smbdirect.c
index b1548269c308a..07f71a9481a36 100644
--- a/fs/smb/client/smbdirect.c
+++ b/fs/smb/client/smbdirect.c
@@ -86,8 +86,23 @@ int smbd_send_credit_target = 255;
/* The maximum single message size can be sent to remote peer */
int smbd_max_send_size = 1364;

-/* The maximum fragmented upper-layer payload receive size supported */
-int smbd_max_fragmented_recv_size = 1024 * 1024;
+/*
+ * The maximum fragmented upper-layer payload receive size supported
+ *
+ * Assume max_payload_per_credit is
+ * smbd_max_receive_size - 24 = 1340
+ *
+ * The maximum number would be
+ * smbd_receive_credit_max * max_payload_per_credit
+ *
+ * 1340 * 255 = 341700 (0x536C4)
+ *
+ * The minimum value from the spec is 131072 (0x20000)
+ *
+ * For now we use the logic we used in ksmbd before:
+ * (1364 * 255) / 2 = 173910 (0x2A756)
+ */
+int smbd_max_fragmented_recv_size = (1364 * 255) / 2;

/* The maximum single-message size which can be received */
int smbd_max_receive_size = 1364;
diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
index 9c3cc7c3300c2..fb9c631e0ec1e 100644
--- a/fs/smb/client/trace.h
+++ b/fs/smb/client/trace.h
@@ -59,6 +59,7 @@
EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
EM(netfs_trace_tcon_ref_put_mnt_ctx, "PUT MntCtx") \
+ EM(netfs_trace_tcon_ref_put_dfs_refer, "PUT DfsRfr") \
EM(netfs_trace_tcon_ref_put_reconnect_server, "PUT Reconn") \
EM(netfs_trace_tcon_ref_put_tlink, "PUT Tlink ") \
EM(netfs_trace_tcon_ref_see_cancelled_close, "SEE Cn-Cls") \
diff --git a/fs/tests/exec_kunit.c b/fs/tests/exec_kunit.c
index 7c77d039680bb..f412d1a0f6bba 100644
--- a/fs/tests/exec_kunit.c
+++ b/fs/tests/exec_kunit.c
@@ -87,9 +87,6 @@ static const struct bprm_stack_limits_result bprm_stack_limits_results[] = {
.argc = 0, .envc = ARG_MAX / sizeof(void *) - 1 },
.expected_argmin = ULONG_MAX - sizeof(void *) },
/* Raising rlim_stack / 4 to _STK_LIM / 4 * 3 will see more space. */
- { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * 3),
- .argc = 0, .envc = 0 },
- .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
{ { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * (_STK_LIM / 4 * 3),
.argc = 0, .envc = 0 },
.expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
@@ -103,9 +100,6 @@ static const struct bprm_stack_limits_result bprm_stack_limits_results[] = {
{ { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * _STK_LIM,
.argc = 0, .envc = 0 },
.expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
- { { .p = ULONG_MAX, .rlim_stack.rlim_cur = 4 * _STK_LIM,
- .argc = 0, .envc = 0 },
- .expected_argmin = ULONG_MAX - (_STK_LIM / 4 * 3) + sizeof(void *) },
};

static void exec_test_bprm_stack_limits(struct kunit *test)
diff --git a/fs/xfs/libxfs/xfs_attr.c b/fs/xfs/libxfs/xfs_attr.c
index c63da14eee043..68d4980d2d193 100644
--- a/fs/xfs/libxfs/xfs_attr.c
+++ b/fs/xfs/libxfs/xfs_attr.c
@@ -50,7 +50,6 @@ STATIC int xfs_attr_shortform_addname(xfs_da_args_t *args);
*/
STATIC int xfs_attr_leaf_get(xfs_da_args_t *args);
STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args);
-STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp);

/*
* Internal routines when attribute list is more than one block.
@@ -979,11 +978,12 @@ xfs_attr_lookup(
return error;

if (xfs_attr_is_leaf(dp)) {
- error = xfs_attr_leaf_hasname(args, &bp);
-
- if (bp)
- xfs_trans_brelse(args->trans, bp);
-
+ error = xfs_attr3_leaf_read(args->trans, args->dp, args->owner,
+ 0, &bp);
+ if (error)
+ return error;
+ error = xfs_attr3_leaf_lookup_int(bp, args);
+ xfs_trans_brelse(args->trans, bp);
return error;
}

@@ -1221,27 +1221,6 @@ xfs_attr_shortform_addname(
* External routines when attribute list is one block
*========================================================================*/

-/*
- * Return EEXIST if attr is found, or ENOATTR if not
- */
-STATIC int
-xfs_attr_leaf_hasname(
- struct xfs_da_args *args,
- struct xfs_buf **bp)
-{
- int error = 0;
-
- error = xfs_attr3_leaf_read(args->trans, args->dp, args->owner, 0, bp);
- if (error)
- return error;
-
- error = xfs_attr3_leaf_lookup_int(*bp, args);
- if (error != -ENOATTR && error != -EEXIST)
- xfs_trans_brelse(args->trans, *bp);
-
- return error;
-}
-
/*
* Remove a name from the leaf attribute list structure
*
@@ -1252,25 +1231,22 @@ STATIC int
xfs_attr_leaf_removename(
struct xfs_da_args *args)
{
- struct xfs_inode *dp;
- struct xfs_buf *bp;
+ struct xfs_inode *dp = args->dp;
int error, forkoff;
+ struct xfs_buf *bp;

trace_xfs_attr_leaf_removename(args);

- /*
- * Remove the attribute.
- */
- dp = args->dp;
-
- error = xfs_attr_leaf_hasname(args, &bp);
- if (error == -ENOATTR) {
+ error = xfs_attr3_leaf_read(args->trans, args->dp, args->owner, 0, &bp);
+ if (error)
+ return error;
+ error = xfs_attr3_leaf_lookup_int(bp, args);
+ if (error != -EEXIST) {
xfs_trans_brelse(args->trans, bp);
- if (args->op_flags & XFS_DA_OP_RECOVERY)
+ if (error == -ENOATTR && (args->op_flags & XFS_DA_OP_RECOVERY))
return 0;
return error;
- } else if (error != -EEXIST)
- return error;
+ }

xfs_attr3_leaf_remove(bp, args);

@@ -1294,23 +1270,20 @@ xfs_attr_leaf_removename(
* Returns 0 on successful retrieval, otherwise an error.
*/
STATIC int
-xfs_attr_leaf_get(xfs_da_args_t *args)
+xfs_attr_leaf_get(
+ struct xfs_da_args *args)
{
- struct xfs_buf *bp;
- int error;
+ struct xfs_buf *bp;
+ int error;

trace_xfs_attr_leaf_get(args);

- error = xfs_attr_leaf_hasname(args, &bp);
-
- if (error == -ENOATTR) {
- xfs_trans_brelse(args->trans, bp);
- return error;
- } else if (error != -EEXIST)
+ error = xfs_attr3_leaf_read(args->trans, args->dp, args->owner, 0, &bp);
+ if (error)
return error;
-
-
- error = xfs_attr3_leaf_getvalue(bp, args);
+ error = xfs_attr3_leaf_lookup_int(bp, args);
+ if (error == -EEXIST)
+ error = xfs_attr3_leaf_getvalue(bp, args);
xfs_trans_brelse(args->trans, bp);
return error;
}
diff --git a/fs/xfs/libxfs/xfs_attr_leaf.c b/fs/xfs/libxfs/xfs_attr_leaf.c
index fddb55605e0cc..bfcac9036c112 100644
--- a/fs/xfs/libxfs/xfs_attr_leaf.c
+++ b/fs/xfs/libxfs/xfs_attr_leaf.c
@@ -1489,6 +1489,7 @@ xfs_attr3_leaf_add_work(
struct xfs_attr_leaf_name_local *name_loc;
struct xfs_attr_leaf_name_remote *name_rmt;
struct xfs_mount *mp;
+ int old_end, new_end;
int tmp;
int i;

@@ -1581,17 +1582,49 @@ xfs_attr3_leaf_add_work(
if (be16_to_cpu(entry->nameidx) < ichdr->firstused)
ichdr->firstused = be16_to_cpu(entry->nameidx);

- ASSERT(ichdr->firstused >= ichdr->count * sizeof(xfs_attr_leaf_entry_t)
- + xfs_attr3_leaf_hdr_size(leaf));
- tmp = (ichdr->count - 1) * sizeof(xfs_attr_leaf_entry_t)
- + xfs_attr3_leaf_hdr_size(leaf);
+ new_end = ichdr->count * sizeof(struct xfs_attr_leaf_entry) +
+ xfs_attr3_leaf_hdr_size(leaf);
+ old_end = new_end - sizeof(struct xfs_attr_leaf_entry);
+
+ ASSERT(ichdr->firstused >= new_end);

for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
- if (ichdr->freemap[i].base == tmp) {
- ichdr->freemap[i].base += sizeof(xfs_attr_leaf_entry_t);
+ int diff = 0;
+
+ if (ichdr->freemap[i].base == old_end) {
+ /*
+ * This freemap entry starts at the old end of the
+ * leaf entry array, so we need to adjust its base
+ * upward to accommodate the larger array.
+ */
+ diff = sizeof(struct xfs_attr_leaf_entry);
+ } else if (ichdr->freemap[i].size > 0 &&
+ ichdr->freemap[i].base < new_end) {
+ /*
+ * This freemap entry starts in the space claimed by
+ * the new leaf entry. Adjust its base upward to
+ * reflect that.
+ */
+ diff = new_end - ichdr->freemap[i].base;
+ }
+
+ if (diff) {
+ ichdr->freemap[i].base += diff;
ichdr->freemap[i].size -=
- min_t(uint16_t, ichdr->freemap[i].size,
- sizeof(xfs_attr_leaf_entry_t));
+ min_t(uint16_t, ichdr->freemap[i].size, diff);
+ }
+
+ /*
+ * Don't leave zero-length freemaps with nonzero base lying
+ * around, because we don't want the code in _remove that
+ * matches on base address to get confused and create
+ * overlapping freemaps. If we end up with no freemap entries
+ * then the next _add will compact the leaf block and
+ * regenerate the freemaps.
+ */
+ if (ichdr->freemap[i].size == 0 && ichdr->freemap[i].base > 0) {
+ ichdr->freemap[i].base = 0;
+ ichdr->holes = 1;
}
}
ichdr->usedbytes += xfs_attr_leaf_entsize(leaf, args->index);
diff --git a/fs/xfs/scrub/agheader_repair.c b/fs/xfs/scrub/agheader_repair.c
index 69b003259784f..a93686df6f67b 100644
--- a/fs/xfs/scrub/agheader_repair.c
+++ b/fs/xfs/scrub/agheader_repair.c
@@ -837,8 +837,12 @@ xrep_agi_buf_cleanup(
{
struct xrep_agi *ragi = buf;

- xfarray_destroy(ragi->iunlink_prev);
- xfarray_destroy(ragi->iunlink_next);
+ if (ragi->iunlink_prev)
+ xfarray_destroy(ragi->iunlink_prev);
+ ragi->iunlink_prev = NULL;
+ if (ragi->iunlink_next)
+ xfarray_destroy(ragi->iunlink_next);
+ ragi->iunlink_next = NULL;
xagino_bitmap_destroy(&ragi->iunlink_bmp);
}

diff --git a/fs/xfs/scrub/alloc_repair.c b/fs/xfs/scrub/alloc_repair.c
index 30295898cc8a6..b43b45fec4f59 100644
--- a/fs/xfs/scrub/alloc_repair.c
+++ b/fs/xfs/scrub/alloc_repair.c
@@ -925,7 +925,22 @@ xrep_revalidate_allocbt(
if (error)
goto out;

+ /*
+ * If the bnobt is still corrupt, we've failed to repair the filesystem
+ * and should just bail out.
+ *
+ * If the bnobt fails cross-examination with the cntbt, the scan will
+ * free the cntbt cursor, so we need to mark the repair incomplete
+ * and avoid walking off the end of the NULL cntbt cursor.
+ */
+ if (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
+ goto out;
+
sc->sm->sm_type = XFS_SCRUB_TYPE_CNTBT;
+ if (!sc->sa.cnt_cur) {
+ xchk_set_incomplete(sc);
+ goto out;
+ }
error = xchk_allocbt(sc);
out:
sc->sm->sm_type = old_type;
diff --git a/fs/xfs/scrub/attr.c b/fs/xfs/scrub/attr.c
index 708334f9b2bd1..a0878fdbcf386 100644
--- a/fs/xfs/scrub/attr.c
+++ b/fs/xfs/scrub/attr.c
@@ -287,32 +287,6 @@ xchk_xattr_set_map(
return ret;
}

-/*
- * Check the leaf freemap from the usage bitmap. Returns false if the
- * attr freemap has problems or points to used space.
- */
-STATIC bool
-xchk_xattr_check_freemap(
- struct xfs_scrub *sc,
- struct xfs_attr3_icleaf_hdr *leafhdr)
-{
- struct xchk_xattr_buf *ab = sc->buf;
- unsigned int mapsize = sc->mp->m_attr_geo->blksize;
- int i;
-
- /* Construct bitmap of freemap contents. */
- bitmap_zero(ab->freemap, mapsize);
- for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
- if (!xchk_xattr_set_map(sc, ab->freemap,
- leafhdr->freemap[i].base,
- leafhdr->freemap[i].size))
- return false;
- }
-
- /* Look for bits that are set in freemap and are marked in use. */
- return !bitmap_intersects(ab->freemap, ab->usedmap, mapsize);
-}
-
/*
* Check this leaf entry's relations to everything else.
* Returns the number of bytes used for the name/value data.
@@ -364,7 +338,10 @@ xchk_xattr_entry(
rentry = xfs_attr3_leaf_name_remote(leaf, idx);
namesize = xfs_attr_leaf_entsize_remote(rentry->namelen);
name_end = (char *)rentry + namesize;
- if (rentry->namelen == 0 || rentry->valueblk == 0)
+ if (rentry->namelen == 0)
+ xchk_da_set_corrupt(ds, level);
+ if (rentry->valueblk == 0 &&
+ !(ent->flags & XFS_ATTR_INCOMPLETE))
xchk_da_set_corrupt(ds, level);
}
if (name_end > buf_end)
@@ -403,6 +380,7 @@ xchk_xattr_block(

*last_checked = blk->blkno;
bitmap_zero(ab->usedmap, mp->m_attr_geo->blksize);
+ bitmap_zero(ab->freemap, mp->m_attr_geo->blksize);

/* Check all the padding. */
if (xfs_has_crc(ds->sc->mp)) {
@@ -449,6 +427,9 @@ xchk_xattr_block(
if ((char *)&entries[leafhdr.count] > (char *)leaf + leafhdr.firstused)
xchk_da_set_corrupt(ds, level);

+ if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
+ goto out;
+
buf_end = (char *)bp->b_addr + mp->m_attr_geo->blksize;
for (i = 0, ent = entries; i < leafhdr.count; ent++, i++) {
/* Mark the leaf entry itself. */
@@ -467,7 +448,29 @@ xchk_xattr_block(
goto out;
}

- if (!xchk_xattr_check_freemap(ds->sc, &leafhdr))
+ /* Construct bitmap of freemap contents. */
+ for (i = 0; i < XFS_ATTR_LEAF_MAPSIZE; i++) {
+ if (!xchk_xattr_set_map(ds->sc, ab->freemap,
+ leafhdr.freemap[i].base,
+ leafhdr.freemap[i].size))
+ xchk_da_set_corrupt(ds, level);
+
+ /*
+ * freemap entries with zero length and nonzero base can cause
+ * problems with older kernels, so we mark these for preening
+ * even though there's no inconsistency.
+ */
+ if (leafhdr.freemap[i].size == 0 &&
+ leafhdr.freemap[i].base > 0)
+ xchk_da_set_preen(ds, level);
+
+ if (ds->sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT)
+ goto out;
+ }
+
+ /* Look for bits that are set in freemap and are marked in use. */
+ if (bitmap_intersects(ab->freemap, ab->usedmap,
+ mp->m_attr_geo->blksize))
xchk_da_set_corrupt(ds, level);

if (leafhdr.usedbytes != usedbytes)
diff --git a/fs/xfs/scrub/attr_repair.c b/fs/xfs/scrub/attr_repair.c
index 09d63aa10314b..50ab476dddb98 100644
--- a/fs/xfs/scrub/attr_repair.c
+++ b/fs/xfs/scrub/attr_repair.c
@@ -1516,8 +1516,10 @@ xrep_xattr_teardown(
xfblob_destroy(rx->pptr_names);
if (rx->pptr_recs)
xfarray_destroy(rx->pptr_recs);
- xfblob_destroy(rx->xattr_blobs);
- xfarray_destroy(rx->xattr_records);
+ if (rx->xattr_blobs)
+ xfblob_destroy(rx->xattr_blobs);
+ if (rx->xattr_records)
+ xfarray_destroy(rx->xattr_records);
mutex_destroy(&rx->lock);
kfree(rx);
}
diff --git a/fs/xfs/scrub/btree.c b/fs/xfs/scrub/btree.c
index b8d5a02334b88..d2b43793f0ef5 100644
--- a/fs/xfs/scrub/btree.c
+++ b/fs/xfs/scrub/btree.c
@@ -42,6 +42,8 @@ __xchk_btree_process_error(
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
+ case -EIO:
+ case -ENODATA:
/* Note the badness but don't abort. */
sc->sm->sm_flags |= errflag;
*error = 0;
diff --git a/fs/xfs/scrub/common.c b/fs/xfs/scrub/common.c
index 22f5f1a9d3f09..d1b0c8b3a3216 100644
--- a/fs/xfs/scrub/common.c
+++ b/fs/xfs/scrub/common.c
@@ -98,6 +98,8 @@ __xchk_process_error(
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
+ case -EIO:
+ case -ENODATA:
/* Note the badness but don't abort. */
sc->sm->sm_flags |= errflag;
*error = 0;
@@ -161,6 +163,8 @@ __xchk_fblock_process_error(
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
+ case -EIO:
+ case -ENODATA:
/* Note the badness but don't abort. */
sc->sm->sm_flags |= errflag;
*error = 0;
@@ -1203,6 +1207,9 @@ xchk_metadata_inode_subtype(
int error;

sub = xchk_scrub_create_subord(sc, scrub_type);
+ if (!sub)
+ return -ENOMEM;
+
error = sub->sc.ops->scrub(&sub->sc);
xchk_scrub_free_subord(sub);
return error;
diff --git a/fs/xfs/scrub/dabtree.c b/fs/xfs/scrub/dabtree.c
index 056de4819f866..a6a5d3a75d994 100644
--- a/fs/xfs/scrub/dabtree.c
+++ b/fs/xfs/scrub/dabtree.c
@@ -45,6 +45,8 @@ xchk_da_process_error(
break;
case -EFSBADCRC:
case -EFSCORRUPTED:
+ case -EIO:
+ case -ENODATA:
/* Note the badness but don't abort. */
sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
*error = 0;
diff --git a/fs/xfs/scrub/dir_repair.c b/fs/xfs/scrub/dir_repair.c
index 64679fe084465..a9e2aa5c3968d 100644
--- a/fs/xfs/scrub/dir_repair.c
+++ b/fs/xfs/scrub/dir_repair.c
@@ -172,8 +172,12 @@ xrep_dir_teardown(
struct xrep_dir *rd = sc->buf;

xrep_findparent_scan_teardown(&rd->pscan);
- xfblob_destroy(rd->dir_names);
- xfarray_destroy(rd->dir_entries);
+ if (rd->dir_names)
+ xfblob_destroy(rd->dir_names);
+ rd->dir_names = NULL;
+ if (rd->dir_entries)
+ xfarray_destroy(rd->dir_entries);
+ rd->dir_entries = NULL;
}

/* Set up for a directory repair. */
diff --git a/fs/xfs/scrub/dirtree.c b/fs/xfs/scrub/dirtree.c
index bde58fb561ea1..73d40a2600a2a 100644
--- a/fs/xfs/scrub/dirtree.c
+++ b/fs/xfs/scrub/dirtree.c
@@ -81,8 +81,12 @@ xchk_dirtree_buf_cleanup(
kfree(path);
}

- xfblob_destroy(dl->path_names);
- xfarray_destroy(dl->path_steps);
+ if (dl->path_names)
+ xfblob_destroy(dl->path_names);
+ dl->path_names = NULL;
+ if (dl->path_steps)
+ xfarray_destroy(dl->path_steps);
+ dl->path_steps = NULL;
mutex_destroy(&dl->lock);
}

diff --git a/fs/xfs/scrub/ialloc_repair.c b/fs/xfs/scrub/ialloc_repair.c
index c8d2196a04e15..6cb2c9d115092 100644
--- a/fs/xfs/scrub/ialloc_repair.c
+++ b/fs/xfs/scrub/ialloc_repair.c
@@ -873,10 +873,24 @@ xrep_revalidate_iallocbt(
if (error)
goto out;

- if (xfs_has_finobt(sc->mp)) {
- sc->sm->sm_type = XFS_SCRUB_TYPE_FINOBT;
- error = xchk_iallocbt(sc);
+ /*
+ * If the inobt is still corrupt, we've failed to repair the filesystem
+ * and should just bail out.
+ *
+ * If the inobt fails cross-examination with the finobt, the scan will
+ * free the finobt cursor, so we need to mark the repair incomplete
+ * and avoid walking off the end of the NULL finobt cursor.
+ */
+ if (!xfs_has_finobt(sc->mp) ||
+ (sc->sm->sm_flags & XFS_SCRUB_OFLAG_CORRUPT))
+ goto out;
+
+ sc->sm->sm_type = XFS_SCRUB_TYPE_FINOBT;
+ if (!sc->sa.fino_cur) {
+ xchk_set_incomplete(sc);
+ goto out;
}
+ error = xchk_iallocbt(sc);

out:
sc->sm->sm_type = old_type;
diff --git a/fs/xfs/scrub/nlinks.c b/fs/xfs/scrub/nlinks.c
index 02f5522552dbf..7e917a5e9ca29 100644
--- a/fs/xfs/scrub/nlinks.c
+++ b/fs/xfs/scrub/nlinks.c
@@ -975,7 +975,8 @@ xchk_nlinks_teardown_scan(

xfs_dir_hook_del(xnc->sc->mp, &xnc->dhook);

- xfarray_destroy(xnc->nlinks);
+ if (xnc->nlinks)
+ xfarray_destroy(xnc->nlinks);
xnc->nlinks = NULL;

xchk_iscan_teardown(&xnc->collect_iscan);
diff --git a/fs/xfs/scrub/repair.c b/fs/xfs/scrub/repair.c
index 155bbaaa496e4..c19abc687072f 100644
--- a/fs/xfs/scrub/repair.c
+++ b/fs/xfs/scrub/repair.c
@@ -1016,6 +1016,9 @@ xrep_metadata_inode_subtype(
* setup/teardown routines.
*/
sub = xchk_scrub_create_subord(sc, scrub_type);
+ if (!sub)
+ return -ENOMEM;
+
error = sub->sc.ops->scrub(&sub->sc);
if (error)
goto out;
diff --git a/fs/xfs/scrub/scrub.c b/fs/xfs/scrub/scrub.c
index 5c266d2842dbe..9bcea8cbb7a61 100644
--- a/fs/xfs/scrub/scrub.c
+++ b/fs/xfs/scrub/scrub.c
@@ -571,7 +571,7 @@ xchk_scrub_create_subord(

sub = kzalloc(sizeof(*sub), XCHK_GFP_FLAGS);
if (!sub)
- return ERR_PTR(-ENOMEM);
+ return NULL;

sub->old_smtype = sc->sm->sm_type;
sub->old_smflags = sc->sm->sm_flags;
diff --git a/include/acpi/ghes.h b/include/acpi/ghes.h
index be1dd4c1a9174..16646fdd1f841 100644
--- a/include/acpi/ghes.h
+++ b/include/acpi/ghes.h
@@ -21,6 +21,7 @@ struct ghes {
struct acpi_hest_generic_v2 *generic_v2;
};
struct acpi_hest_generic_status *estatus;
+ unsigned int estatus_length;
unsigned long flags;
union {
struct list_head list;
diff --git a/include/asm-generic/audit_change_attr.h b/include/asm-generic/audit_change_attr.h
index 331670807cf01..6c311d4d37f4e 100644
--- a/include/asm-generic/audit_change_attr.h
+++ b/include/asm-generic/audit_change_attr.h
@@ -20,6 +20,9 @@ __NR_fremovexattr,
__NR_fchownat,
__NR_fchmodat,
#endif
+#ifdef __NR_fchmodat2
+__NR_fchmodat2,
+#endif
#ifdef __NR_chown32
__NR_chown32,
__NR_fchown32,
diff --git a/include/asm-generic/audit_read.h b/include/asm-generic/audit_read.h
index 7bb7b5a83ae2e..fb9991f53fb6f 100644
--- a/include/asm-generic/audit_read.h
+++ b/include/asm-generic/audit_read.h
@@ -4,9 +4,15 @@ __NR_readlink,
#endif
__NR_quotactl,
__NR_listxattr,
+#ifdef __NR_listxattrat
+__NR_listxattrat,
+#endif
__NR_llistxattr,
__NR_flistxattr,
__NR_getxattr,
+#ifdef __NR_getxattrat
+__NR_getxattrat,
+#endif
__NR_lgetxattr,
__NR_fgetxattr,
#ifdef __NR_readlinkat
diff --git a/include/drm/drm_of.h b/include/drm/drm_of.h
index 02d1cdd7f798c..f751f536a6895 100644
--- a/include/drm/drm_of.h
+++ b/include/drm/drm_of.h
@@ -5,6 +5,7 @@
#include <linux/err.h>
#include <linux/of_graph.h>
#if IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_DRM_PANEL_BRIDGE)
+#include <linux/of.h>
#include <drm/drm_bridge.h>
#endif

@@ -164,6 +165,8 @@ static inline int drm_of_panel_bridge_remove(const struct device_node *np,
bridge = of_drm_find_bridge(remote);
drm_panel_bridge_remove(bridge);

+ of_node_put(remote);
+
return 0;
#else
return -EINVAL;
diff --git a/include/linux/audit.h b/include/linux/audit.h
index e3f06eba9c6e6..73fb8a4bcf2ae 100644
--- a/include/linux/audit.h
+++ b/include/linux/audit.h
@@ -126,12 +126,6 @@ enum audit_nfcfgop {
extern int __init audit_register_class(int class, unsigned *list);
extern int audit_classify_syscall(int abi, unsigned syscall);
extern int audit_classify_arch(int arch);
-/* only for compat system calls */
-extern unsigned compat_write_class[];
-extern unsigned compat_read_class[];
-extern unsigned compat_dir_class[];
-extern unsigned compat_chattr_class[];
-extern unsigned compat_signal_class[];

/* audit_names->type values */
#define AUDIT_TYPE_UNKNOWN 0 /* we don't know yet */
diff --git a/include/linux/audit_arch.h b/include/linux/audit_arch.h
index 0e34d673ef171..2b8153791e6a5 100644
--- a/include/linux/audit_arch.h
+++ b/include/linux/audit_arch.h
@@ -23,4 +23,11 @@ enum auditsc_class_t {

extern int audit_classify_compat_syscall(int abi, unsigned syscall);

+/* only for compat system calls */
+extern unsigned compat_write_class[];
+extern unsigned compat_read_class[];
+extern unsigned compat_dir_class[];
+extern unsigned compat_chattr_class[];
+extern unsigned compat_signal_class[];
+
#endif
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 1289b8e487801..80ca2fb879504 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -425,6 +425,8 @@ void __bio_add_page(struct bio *bio, struct page *page,
unsigned int len, unsigned int off);
void bio_add_folio_nofail(struct bio *bio, struct folio *folio, size_t len,
size_t off);
+void bio_add_virt_nofail(struct bio *bio, void *vaddr, unsigned len);
+
int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
void bio_iov_bvec_set(struct bio *bio, struct iov_iter *iter);
void __bio_release_pages(struct bio *bio, bool mark_dirty);
diff --git a/include/linux/capability.h b/include/linux/capability.h
index 0c356a5179917..767c535dbd38e 100644
--- a/include/linux/capability.h
+++ b/include/linux/capability.h
@@ -208,6 +208,12 @@ static inline bool checkpoint_restore_ns_capable(struct user_namespace *ns)
ns_capable(ns, CAP_SYS_ADMIN);
}

+static inline bool checkpoint_restore_ns_capable_noaudit(struct user_namespace *ns)
+{
+ return ns_capable_noaudit(ns, CAP_CHECKPOINT_RESTORE) ||
+ ns_capable_noaudit(ns, CAP_SYS_ADMIN);
+}
+
/* audit system wants to get cap info from files as well */
int get_vfs_caps_from_disk(struct mnt_idmap *idmap,
const struct dentry *dentry,
diff --git a/include/linux/clk.h b/include/linux/clk.h
index 851a0f2cf42c8..9488c58d8e6af 100644
--- a/include/linux/clk.h
+++ b/include/linux/clk.h
@@ -228,6 +228,23 @@ int devm_clk_rate_exclusive_get(struct device *dev, struct clk *clk);
*/
void clk_rate_exclusive_put(struct clk *clk);

+/**
+ * clk_save_context - save clock context for poweroff
+ *
+ * Saves the context of the clock register for powerstates in which the
+ * contents of the registers will be lost. Occurs deep within the suspend
+ * code so locking is not necessary.
+ */
+int clk_save_context(void);
+
+/**
+ * clk_restore_context - restore clock context after poweroff
+ *
+ * This occurs with all clocks enabled. Occurs deep within the resume code
+ * so locking is not necessary.
+ */
+void clk_restore_context(void);
+
#else

static inline int clk_notifier_register(struct clk *clk,
@@ -293,6 +310,13 @@ static inline int devm_clk_rate_exclusive_get(struct device *dev, struct clk *cl

static inline void clk_rate_exclusive_put(struct clk *clk) {}

+static inline int clk_save_context(void)
+{
+ return 0;
+}
+
+static inline void clk_restore_context(void) {}
+
#endif

#ifdef CONFIG_HAVE_CLK_PREPARE
@@ -931,23 +955,6 @@ struct clk *clk_get_parent(struct clk *clk);
*/
struct clk *clk_get_sys(const char *dev_id, const char *con_id);

-/**
- * clk_save_context - save clock context for poweroff
- *
- * Saves the context of the clock register for powerstates in which the
- * contents of the registers will be lost. Occurs deep within the suspend
- * code so locking is not necessary.
- */
-int clk_save_context(void);
-
-/**
- * clk_restore_context - restore clock context after poweroff
- *
- * This occurs with all clocks enabled. Occurs deep within the resume code
- * so locking is not necessary.
- */
-void clk_restore_context(void);
-
#else /* !CONFIG_HAVE_CLK */

static inline struct clk *clk_get(struct device *dev, const char *id)
@@ -1127,13 +1134,6 @@ static inline struct clk *clk_get_sys(const char *dev_id, const char *con_id)
return NULL;
}

-static inline int clk_save_context(void)
-{
- return 0;
-}
-
-static inline void clk_restore_context(void) {}
-
#endif

/* clk_prepare_enable helps cases using clk_enable in non-atomic context. */
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 8e41eff78f42d..bf775396d384e 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -297,6 +297,22 @@ struct ftrace_likely_data {
# define __no_kasan_or_inline __always_inline
#endif

+#ifdef CONFIG_KCSAN
+/*
+ * Type qualifier to mark variables where all data-racy accesses should be
+ * ignored by KCSAN. Note, the implementation simply marks these variables as
+ * volatile, since KCSAN will treat such accesses as "marked".
+ *
+ * Defined here because defining __data_racy as volatile for KCSAN objects only
+ * causes problems in BPF Type Format (BTF) generation since struct members
+ * of core kernel data structs will be volatile in some objects and not in
+ * others. Instead define it globally for KCSAN kernels.
+ */
+# define __data_racy volatile
+#else
+# define __data_racy
+#endif
+
#ifdef __SANITIZE_THREAD__
/*
* Clang still emits instrumentation for __tsan_func_{entry,exit}() and builtin
@@ -308,16 +324,9 @@ struct ftrace_likely_data {
* disable all instrumentation. See Kconfig.kcsan where this is mandatory.
*/
# define __no_kcsan __no_sanitize_thread __disable_sanitizer_instrumentation
-/*
- * Type qualifier to mark variables where all data-racy accesses should be
- * ignored by KCSAN. Note, the implementation simply marks these variables as
- * volatile, since KCSAN will treat such accesses as "marked".
- */
-# define __data_racy volatile
# define __no_sanitize_or_inline __no_kcsan notrace __maybe_unused
#else
# define __no_kcsan
-# define __data_racy
#endif

#ifdef __SANITIZE_MEMORY__
diff --git a/include/linux/cper.h b/include/linux/cper.h
index 3670b866ac119..951291fa50d4f 100644
--- a/include/linux/cper.h
+++ b/include/linux/cper.h
@@ -591,7 +591,8 @@ void cper_mem_err_pack(const struct cper_sec_mem_err *,
const char *cper_mem_err_unpack(struct trace_seq *,
struct cper_mem_err_compact *);
void cper_print_proc_arm(const char *pfx,
- const struct cper_sec_proc_arm *proc);
+ const struct cper_sec_proc_arm *proc,
+ u32 length);
void cper_print_proc_ia(const char *pfx,
const struct cper_sec_proc_ia *proc);
int cper_mem_err_location(struct cper_mem_err_compact *mem, char *msg);
diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
index fd5e84d0ec478..1ddc2a3ee463f 100644
--- a/include/linux/ftrace.h
+++ b/include/linux/ftrace.h
@@ -43,9 +43,8 @@ struct dyn_ftrace;

char *arch_ftrace_match_adjust(char *str, const char *search);

-#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
-struct fgraph_ret_regs;
-unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs);
+#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS
+unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs);
#else
unsigned long ftrace_return_to_handler(unsigned long frame_pointer);
#endif
@@ -88,11 +87,13 @@ struct ftrace_hash;
defined(CONFIG_DYNAMIC_FTRACE)
int
ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
- unsigned long *off, char **modname, char *sym);
+ unsigned long *off, char **modname,
+ const unsigned char **modbuildid, char *sym);
#else
static inline int
ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
- unsigned long *off, char **modname, char *sym)
+ unsigned long *off, char **modname,
+ const unsigned char **modbuildid, char *sym)
{
return 0;
}
@@ -113,14 +114,61 @@ static inline int ftrace_mod_get_kallsym(unsigned int symnum, unsigned long *val

#ifdef CONFIG_FUNCTION_TRACER

-extern int ftrace_enabled;
+#include <linux/ftrace_regs.h>

-#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+extern int ftrace_enabled;

+/**
+ * ftrace_regs - ftrace partial/optimal register set
+ *
+ * ftrace_regs represents a group of registers which is used at the
+ * function entry and exit. There are three types of registers.
+ *
+ * - Registers for passing the parameters to callee, including the stack
+ * pointer. (e.g. rcx, rdx, rdi, rsi, r8, r9 and rsp on x86_64)
+ * - Registers for passing the return values to caller.
+ * (e.g. rax and rdx on x86_64)
+ * - Registers for hooking the function call and return including the
+ * frame pointer (the frame pointer is architecture/config dependent)
+ * (e.g. rip, rbp and rsp for x86_64)
+ *
+ * Also, architecture dependent fields can be used for internal process.
+ * (e.g. orig_ax on x86_64)
+ *
+ * Basically, ftrace_regs stores the registers related to the context.
+ * On function entry, registers for function parameters and hooking the
+ * function call are stored, and on function exit, registers for function
+ * return value and frame pointers are stored.
+ *
+ * Also, it depends on the context which registers are restored
+ * from the ftrace_regs.
+ * On the function entry, those registers will be restored except for
+ * the stack pointer, so that user can change the function parameters
+ * and instruction pointer (e.g. live patching.)
+ * On the function exit, only registers which are used for return values
+ * are restored.
+ *
+ * NOTE: user *must not* access regs directly, only do it via APIs, because
+ * the member can be changed according to the architecture.
+ * This is why the structure is empty here, so that nothing accesses
+ * the ftrace_regs directly.
+ */
struct ftrace_regs {
- struct pt_regs regs;
+ /* Nothing to see here, use the accessor functions! */
};
-#define arch_ftrace_get_regs(fregs) (&(fregs)->regs)
+
+#define ftrace_regs_size() sizeof(struct __arch_ftrace_regs)
+
+#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+/*
+ * Architectures that define HAVE_DYNAMIC_FTRACE_WITH_ARGS must define their own
+ * arch_ftrace_get_regs() where it only returns pt_regs *if* it is fully
+ * populated. It should return NULL otherwise.
+ */
+static inline struct pt_regs *arch_ftrace_get_regs(struct ftrace_regs *fregs)
+{
+ return &arch_ftrace_regs(fregs)->regs;
+}

/*
* ftrace_regs_set_instruction_pointer() is to be defined by the architecture
@@ -138,6 +186,62 @@ static __always_inline struct pt_regs *ftrace_get_regs(struct ftrace_regs *fregs
return arch_ftrace_get_regs(fregs);
}

+#if !defined(CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS) || \
+ defined(CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS)
+
+#ifndef arch_ftrace_partial_regs
+#define arch_ftrace_partial_regs(regs) do {} while (0)
+#endif
+
+static __always_inline struct pt_regs *
+ftrace_partial_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
+{
+ /*
+ * If CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS=y, ftrace_regs memory
+ * layout includes pt_regs, so this always returns that address.
+ * Since arch_ftrace_get_regs() will check some members and may return
+ * NULL, we can not use it.
+ */
+ regs = &arch_ftrace_regs(fregs)->regs;
+
+ /* Allow arch specific updates to regs. */
+ arch_ftrace_partial_regs(regs);
+ return regs;
+}
+
+#endif /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS || CONFIG_HAVE_FTRACE_REGS_HAVING_PT_REGS */
+
+#ifdef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
+
+/*
+ * Please define an arch dependent pt_regs which is compatible with the
+ * perf_arch_fetch_caller_regs() but based on ftrace_regs.
+ * This requires
+ * - user_mode(_regs) returns false (always kernel mode).
+ * - able to use the _regs for stack trace.
+ */
+#ifndef arch_ftrace_fill_perf_regs
+/* Same as perf_arch_fetch_caller_regs(): do nothing by default */
+#define arch_ftrace_fill_perf_regs(fregs, _regs) do {} while (0)
+#endif
+
+static __always_inline struct pt_regs *
+ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
+{
+ arch_ftrace_fill_perf_regs(fregs, regs);
+ return regs;
+}
+
+#else /* !CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS */
+
+static __always_inline struct pt_regs *
+ftrace_fill_perf_regs(struct ftrace_regs *fregs, struct pt_regs *regs)
+{
+ return &arch_ftrace_regs(fregs)->regs;
+}
+
+#endif
+
/*
* When true, the ftrace_regs_{get,set}_*() functions may be used on fregs.
* Note: this can be true even when ftrace_get_regs() cannot provide a pt_regs.
@@ -150,23 +254,6 @@ static __always_inline bool ftrace_regs_has_args(struct ftrace_regs *fregs)
return ftrace_get_regs(fregs) != NULL;
}

-#ifndef CONFIG_HAVE_DYNAMIC_FTRACE_WITH_ARGS
-#define ftrace_regs_get_instruction_pointer(fregs) \
- instruction_pointer(ftrace_get_regs(fregs))
-#define ftrace_regs_get_argument(fregs, n) \
- regs_get_kernel_argument(ftrace_get_regs(fregs), n)
-#define ftrace_regs_get_stack_pointer(fregs) \
- kernel_stack_pointer(ftrace_get_regs(fregs))
-#define ftrace_regs_return_value(fregs) \
- regs_return_value(ftrace_get_regs(fregs))
-#define ftrace_regs_set_return_value(fregs, ret) \
- regs_set_return_value(ftrace_get_regs(fregs), ret)
-#define ftrace_override_function_with_return(fregs) \
- override_function_with_return(ftrace_get_regs(fregs))
-#define ftrace_regs_query_register_offset(name) \
- regs_query_register_offset(name)
-#endif
-
typedef void (*ftrace_func_t)(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs);

@@ -907,10 +994,17 @@ static inline bool is_ftrace_trampoline(unsigned long addr)

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
#ifndef ftrace_graph_func
-#define ftrace_graph_func ftrace_stub
-#define FTRACE_OPS_GRAPH_STUB FTRACE_OPS_FL_STUB
+# define ftrace_graph_func ftrace_stub
+# define FTRACE_OPS_GRAPH_STUB FTRACE_OPS_FL_STUB
+/*
+ * The function graph is called every time the function tracer is called.
+ * It must always test the ops hash and cannot just directly call
+ * the handler.
+ */
+# define FGRAPH_NO_DIRECT 1
#else
-#define FTRACE_OPS_GRAPH_STUB 0
+# define FTRACE_OPS_GRAPH_STUB 0
+# define FGRAPH_NO_DIRECT 0
#endif
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */

diff --git a/include/linux/ftrace_regs.h b/include/linux/ftrace_regs.h
new file mode 100644
index 0000000000000..bbc1873ca6b8e
--- /dev/null
+++ b/include/linux/ftrace_regs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_FTRACE_REGS_H
+#define _LINUX_FTRACE_REGS_H
+
+/*
+ * For archs that just copy pt_regs in ftrace_regs, they can use this default.
+ * If an architecture does not use pt_regs, it must define all the below
+ * accessor functions.
+ */
+#ifndef HAVE_ARCH_FTRACE_REGS
+struct __arch_ftrace_regs {
+ struct pt_regs regs;
+};
+
+#define arch_ftrace_regs(fregs) ((struct __arch_ftrace_regs *)(fregs))
+
+struct ftrace_regs;
+
+#define ftrace_regs_get_instruction_pointer(fregs) \
+ instruction_pointer(&arch_ftrace_regs(fregs)->regs)
+#define ftrace_regs_get_argument(fregs, n) \
+ regs_get_kernel_argument(&arch_ftrace_regs(fregs)->regs, n)
+#define ftrace_regs_get_stack_pointer(fregs) \
+ kernel_stack_pointer(&arch_ftrace_regs(fregs)->regs)
+#define ftrace_regs_get_return_value(fregs) \
+ regs_return_value(&arch_ftrace_regs(fregs)->regs)
+#define ftrace_regs_set_return_value(fregs, ret) \
+ regs_set_return_value(&arch_ftrace_regs(fregs)->regs, ret)
+#define ftrace_override_function_with_return(fregs) \
+ override_function_with_return(&arch_ftrace_regs(fregs)->regs)
+#define ftrace_regs_query_register_offset(name) \
+ regs_query_register_offset(name)
+#define ftrace_regs_get_frame_pointer(fregs) \
+ frame_pointer(&arch_ftrace_regs(fregs)->regs)
+
+#endif /* HAVE_ARCH_FTRACE_REGS */
+
+#endif /* _LINUX_FTRACE_REGS_H */
diff --git a/include/linux/hw_random.h b/include/linux/hw_random.h
index b424555753b11..b77bc55a4cf35 100644
--- a/include/linux/hw_random.h
+++ b/include/linux/hw_random.h
@@ -15,6 +15,7 @@
#include <linux/completion.h>
#include <linux/kref.h>
#include <linux/types.h>
+#include <linux/workqueue_types.h>

/**
* struct hwrng - Hardware Random Number Generator driver
@@ -48,6 +49,7 @@ struct hwrng {
/* internal. */
struct list_head list;
struct kref ref;
+ struct work_struct cleanup_work;
struct completion cleanup_done;
struct completion dying;
};
diff --git a/include/linux/inetdevice.h b/include/linux/inetdevice.h
index cb5280e6cc219..9c0f263e2cd28 100644
--- a/include/linux/inetdevice.h
+++ b/include/linux/inetdevice.h
@@ -38,11 +38,11 @@ struct in_device {
struct ip_mc_list *mc_tomb;
unsigned long mr_v1_seen;
unsigned long mr_v2_seen;
- unsigned long mr_maxdelay;
unsigned long mr_qi; /* Query Interval */
unsigned long mr_qri; /* Query Response Interval */
unsigned char mr_qrv; /* Query Robustness Variable */
unsigned char mr_gq_running;
+ u32 mr_maxdelay;
u32 mr_ifc_count;
struct timer_list mr_gq_timer; /* general query timer */
struct timer_list mr_ifc_timer; /* interface change timer */
diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index b378fbf885ce3..2125d4d9b44f2 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -215,7 +215,7 @@ static inline int __must_check
devm_request_irq(struct device *dev, unsigned int irq, irq_handler_t handler,
unsigned long irqflags, const char *devname, void *dev_id)
{
- return devm_request_threaded_irq(dev, irq, handler, NULL, irqflags,
+ return devm_request_threaded_irq(dev, irq, handler, NULL, irqflags | IRQF_COND_ONESHOT,
devname, dev_id);
}

diff --git a/include/linux/mfd/wm8350/core.h b/include/linux/mfd/wm8350/core.h
index a3241e4d75486..4816d4f472101 100644
--- a/include/linux/mfd/wm8350/core.h
+++ b/include/linux/mfd/wm8350/core.h
@@ -663,7 +663,7 @@ static inline int wm8350_register_irq(struct wm8350 *wm8350, int irq,
return -ENODEV;

return request_threaded_irq(irq + wm8350->irq_base, NULL,
- handler, flags, name, data);
+ handler, flags | IRQF_ONESHOT, name, data);
}

static inline void wm8350_free_irq(struct wm8350 *wm8350, int irq, void *data)
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 9a8eb644f6707..2e3324578f2ea 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -1297,12 +1297,12 @@ static inline bool mlx5_rl_is_supported(struct mlx5_core_dev *dev)
static inline int mlx5_core_is_mp_slave(struct mlx5_core_dev *dev)
{
return MLX5_CAP_GEN(dev, affiliate_nic_vport_criteria) &&
- MLX5_CAP_GEN(dev, num_vhca_ports) <= 1;
+ MLX5_CAP_GEN_MAX(dev, num_vhca_ports) <= 1;
}

static inline int mlx5_core_is_mp_master(struct mlx5_core_dev *dev)
{
- return MLX5_CAP_GEN(dev, num_vhca_ports) > 1;
+ return MLX5_CAP_GEN_MAX(dev, num_vhca_ports) > 1;
}

static inline int mlx5_core_mp_enabled(struct mlx5_core_dev *dev)
diff --git a/include/linux/module.h b/include/linux/module.h
index 7886217c99881..1cb6f80e1b485 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -750,6 +750,15 @@ static inline void __module_get(struct module *module)
__mod ? __mod->name : "kernel"; \
})

+static inline const unsigned char *module_buildid(struct module *mod)
+{
+#ifdef CONFIG_STACKTRACE_BUILD_ID
+ return mod->build_id;
+#else
+ return NULL;
+#endif
+}
+
/* Dereference module function descriptor */
void *dereference_module_function_descriptor(struct module *mod, void *ptr);

diff --git a/include/linux/mtd/spinand.h b/include/linux/mtd/spinand.h
index 702e5fb13dae7..4bf33cefbae5f 100644
--- a/include/linux/mtd/spinand.h
+++ b/include/linux/mtd/spinand.h
@@ -195,7 +195,7 @@ struct spinand_device;

/**
* struct spinand_id - SPI NAND id structure
- * @data: buffer containing the id bytes. Currently 4 bytes large, but can
+ * @data: buffer containing the id bytes. Currently 5 bytes large, but can
* be extended if required
* @len: ID length
*/
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index c395b3c5c05cf..ba1b9617f724a 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -3134,6 +3134,7 @@
#define PCI_DEVICE_ID_INTEL_HDA_CML_S 0xa3f0
#define PCI_DEVICE_ID_INTEL_HDA_LNL_P 0xa828
#define PCI_DEVICE_ID_INTEL_S21152BB 0xb152
+#define PCI_DEVICE_ID_INTEL_HDA_NVL 0xd328
#define PCI_DEVICE_ID_INTEL_HDA_BMG 0xe2f7
#define PCI_DEVICE_ID_INTEL_HDA_PTL_H 0xe328
#define PCI_DEVICE_ID_INTEL_HDA_PTL 0xe428
diff --git a/include/linux/psp.h b/include/linux/psp.h
index 92e60aeef21e1..b337dcce1e991 100644
--- a/include/linux/psp.h
+++ b/include/linux/psp.h
@@ -18,6 +18,7 @@
* and should include an appropriate local definition in their source file.
*/
#define PSP_CMDRESP_STS GENMASK(15, 0)
+#define PSP_TEE_STS_RING_BUSY 0x0000000d /* Ring already initialized */
#define PSP_CMDRESP_CMD GENMASK(23, 16)
#define PSP_CMDRESP_RESERVED GENMASK(29, 24)
#define PSP_CMDRESP_RECOVERY BIT(30)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 2e26a054d260c..4344724a97821 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -1146,6 +1146,38 @@ static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
return (struct dst_entry *)(skb->_skb_refdst & SKB_DST_PTRMASK);
}

+/**
+ * skb_dstref_steal() - return current dst_entry value and clear it
+ * @skb: buffer
+ *
+ * Resets skb dst_entry without adjusting its reference count. Useful in
+ * cases where dst_entry needs to be temporarily reset and restored.
+ * Note that the returned value cannot be used directly because it
+ * might contain the SKB_DST_NOREF bit.
+ *
+ * When in doubt, prefer skb_dst_drop() over skb_dstref_steal() to correctly
+ * handle dst_entry reference counting.
+ *
+ * Returns: original skb dst_entry.
+ */
+static inline unsigned long skb_dstref_steal(struct sk_buff *skb)
+{
+ unsigned long refdst = skb->_skb_refdst;
+
+ skb->_skb_refdst = 0;
+ return refdst;
+}
+
+/**
+ * skb_dstref_restore() - restore skb dst_entry removed via skb_dstref_steal()
+ * @skb: buffer
+ * @refdst: dst entry from a call to skb_dstref_steal()
+ */
+static inline void skb_dstref_restore(struct sk_buff *skb, unsigned long refdst)
+{
+ skb->_skb_refdst = refdst;
+}
+
/**
* skb_dst_set - sets skb dst
* @skb: buffer
diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
index 0b9095a281b89..5581e7263c504 100644
--- a/include/linux/skmsg.h
+++ b/include/linux/skmsg.h
@@ -97,6 +97,8 @@ struct sk_psock {
struct sk_buff_head ingress_skb;
struct list_head ingress_msg;
spinlock_t ingress_lock;
+ /** @msg_tot_len: Total bytes queued in ingress_msg list. */
+ u32 msg_tot_len;
unsigned long state;
struct list_head link;
spinlock_t link_lock;
@@ -141,6 +143,8 @@ int sk_msg_memcopy_from_iter(struct sock *sk, struct iov_iter *from,
struct sk_msg *msg, u32 bytes);
int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
int len, int flags);
+int __sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ int len, int flags, int *copied_from_self);
bool sk_msg_is_readable(struct sock *sk);

static inline void sk_msg_check_to_free(struct sk_msg *msg, u32 i, u32 bytes)
@@ -319,6 +323,27 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
kfree_skb(skb);
}

+static inline u32 sk_psock_get_msg_len_nolock(struct sk_psock *psock)
+{
+ /* Used by ioctl to read msg_tot_len only; lock-free for performance */
+ return READ_ONCE(psock->msg_tot_len);
+}
+
+static inline void sk_psock_msg_len_add_locked(struct sk_psock *psock, int diff)
+{
+ /* Use WRITE_ONCE to ensure correct read in sk_psock_get_msg_len_nolock().
+ * ingress_lock should be held to prevent concurrent updates to msg_tot_len
+ */
+ WRITE_ONCE(psock->msg_tot_len, psock->msg_tot_len + diff);
+}
+
+static inline void sk_psock_msg_len_add(struct sk_psock *psock, int diff)
+{
+ spin_lock_bh(&psock->ingress_lock);
+ sk_psock_msg_len_add_locked(psock, diff);
+ spin_unlock_bh(&psock->ingress_lock);
+}
+
static inline bool sk_psock_queue_msg(struct sk_psock *psock,
struct sk_msg *msg)
{
@@ -327,6 +352,7 @@ static inline bool sk_psock_queue_msg(struct sk_psock *psock,
spin_lock_bh(&psock->ingress_lock);
if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
list_add_tail(&msg->list, &psock->ingress_msg);
+ sk_psock_msg_len_add_locked(psock, msg->sg.size);
ret = true;
} else {
sk_msg_free(psock->sk, msg);
@@ -343,18 +369,25 @@ static inline struct sk_msg *sk_psock_dequeue_msg(struct sk_psock *psock)

spin_lock_bh(&psock->ingress_lock);
msg = list_first_entry_or_null(&psock->ingress_msg, struct sk_msg, list);
- if (msg)
+ if (msg) {
list_del(&msg->list);
+ sk_psock_msg_len_add_locked(psock, -msg->sg.size);
+ }
spin_unlock_bh(&psock->ingress_lock);
return msg;
}

+static inline struct sk_msg *sk_psock_peek_msg_locked(struct sk_psock *psock)
+{
+ return list_first_entry_or_null(&psock->ingress_msg, struct sk_msg, list);
+}
+
static inline struct sk_msg *sk_psock_peek_msg(struct sk_psock *psock)
{
struct sk_msg *msg;

spin_lock_bh(&psock->ingress_lock);
- msg = list_first_entry_or_null(&psock->ingress_msg, struct sk_msg, list);
+ msg = sk_psock_peek_msg_locked(psock);
spin_unlock_bh(&psock->ingress_lock);
return msg;
}
@@ -521,6 +554,39 @@ static inline bool sk_psock_strp_enabled(struct sk_psock *psock)
return !!psock->saved_data_ready;
}

+/* for tcp only, sk is locked */
+static inline ssize_t sk_psock_msg_inq(struct sock *sk)
+{
+ struct sk_psock *psock;
+ ssize_t inq = 0;
+
+ psock = sk_psock_get(sk);
+ if (likely(psock)) {
+ inq = sk_psock_get_msg_len_nolock(psock);
+ sk_psock_put(sk, psock);
+ }
+ return inq;
+}
+
+/* for udp only, sk is not locked */
+static inline ssize_t sk_msg_first_len(struct sock *sk)
+{
+ struct sk_psock *psock;
+ struct sk_msg *msg;
+ ssize_t inq = 0;
+
+ psock = sk_psock_get(sk);
+ if (likely(psock)) {
+ spin_lock_bh(&psock->ingress_lock);
+ msg = sk_psock_peek_msg_locked(psock);
+ if (msg)
+ inq = msg->sg.size;
+ spin_unlock_bh(&psock->ingress_lock);
+ sk_psock_put(sk, psock);
+ }
+ return inq;
+}
+
#if IS_ENABLED(CONFIG_NET_SOCK_MSG)

#define BPF_F_STRPARSER (1UL << 1)
diff --git a/include/linux/sunrpc/xdrgen/_builtins.h b/include/linux/sunrpc/xdrgen/_builtins.h
index 66ca3ece951ab..a5ab75d2db044 100644
--- a/include/linux/sunrpc/xdrgen/_builtins.h
+++ b/include/linux/sunrpc/xdrgen/_builtins.h
@@ -188,12 +188,10 @@ xdrgen_decode_string(struct xdr_stream *xdr, string *ptr, u32 maxlen)
return false;
if (unlikely(maxlen && len > maxlen))
return false;
- if (len != 0) {
- p = xdr_inline_decode(xdr, len);
- if (unlikely(!p))
- return false;
- ptr->data = (unsigned char *)p;
- }
+ p = xdr_inline_decode(xdr, len);
+ if (unlikely(!p))
+ return false;
+ ptr->data = (unsigned char *)p;
ptr->len = len;
return true;
}
@@ -219,12 +217,10 @@ xdrgen_decode_opaque(struct xdr_stream *xdr, opaque *ptr, u32 maxlen)
return false;
if (unlikely(maxlen && len > maxlen))
return false;
- if (len != 0) {
- p = xdr_inline_decode(xdr, len);
- if (unlikely(!p))
- return false;
- ptr->data = (u8 *)p;
- }
+ p = xdr_inline_decode(xdr, len);
+ if (unlikely(!p))
+ return false;
+ ptr->data = (u8 *)p;
ptr->len = len;
return true;
}
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index fcf5a64d5cfe2..54ba231ded519 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -701,6 +701,11 @@ static inline void hist_poll_wakeup(void)

#define hist_poll_wait(file, wait) \
poll_wait(file, &hist_poll_wq, wait)
+
+#else
+static inline void hist_poll_wakeup(void)
+{
+}
#endif

#define __TRACE_EVENT_FLAGS(name, value) \
diff --git a/include/linux/u64_stats_sync.h b/include/linux/u64_stats_sync.h
index 457879938fc19..3366090a86bd2 100644
--- a/include/linux/u64_stats_sync.h
+++ b/include/linux/u64_stats_sync.h
@@ -89,6 +89,11 @@ static inline void u64_stats_add(u64_stats_t *p, unsigned long val)
local64_add(val, &p->v);
}

+static inline void u64_stats_sub(u64_stats_t *p, s64 val)
+{
+ local64_sub(val, &p->v);
+}
+
static inline void u64_stats_inc(u64_stats_t *p)
{
local64_inc(&p->v);
@@ -130,6 +135,11 @@ static inline void u64_stats_add(u64_stats_t *p, unsigned long val)
p->v += val;
}

+static inline void u64_stats_sub(u64_stats_t *p, s64 val)
+{
+ p->v -= val;
+}
+
static inline void u64_stats_inc(u64_stats_t *p)
{
p->v++;
diff --git a/include/media/dvb_vb2.h b/include/media/dvb_vb2.h
index 8cb88452cd6c2..0fbbfc65157e6 100644
--- a/include/media/dvb_vb2.h
+++ b/include/media/dvb_vb2.h
@@ -124,7 +124,7 @@ static inline int dvb_vb2_release(struct dvb_vb2_ctx *ctx)
return 0;
};
#define dvb_vb2_is_streaming(ctx) (0)
-#define dvb_vb2_fill_buffer(ctx, file, wait, flags) (0)
+#define dvb_vb2_fill_buffer(ctx, file, wait, flags, flush) (0)

static inline __poll_t dvb_vb2_poll(struct dvb_vb2_ctx *ctx,
struct file *file,
@@ -166,10 +166,12 @@ int dvb_vb2_is_streaming(struct dvb_vb2_ctx *ctx);
* @buffer_flags:
* pointer to buffer flags as defined by &enum dmx_buffer_flags.
* can be NULL.
+ * @flush: flush the buffer, even if it isn't full.
*/
int dvb_vb2_fill_buffer(struct dvb_vb2_ctx *ctx,
const unsigned char *src, int len,
- enum dmx_buffer_flags *buffer_flags);
+ enum dmx_buffer_flags *buffer_flags,
+ bool flush);

/**
* dvb_vb2_poll - Wrapper to vb2_core_streamon() for Digital TV
diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
index 9189354c568f4..b233779824c20 100644
--- a/include/net/bluetooth/l2cap.h
+++ b/include/net/bluetooth/l2cap.h
@@ -284,9 +284,9 @@ struct l2cap_conn_rsp {
#define L2CAP_CR_LE_BAD_KEY_SIZE 0x0007
#define L2CAP_CR_LE_ENCRYPTION 0x0008
#define L2CAP_CR_LE_INVALID_SCID 0x0009
-#define L2CAP_CR_LE_SCID_IN_USE 0X000A
-#define L2CAP_CR_LE_UNACCEPT_PARAMS 0X000B
-#define L2CAP_CR_LE_INVALID_PARAMS 0X000C
+#define L2CAP_CR_LE_SCID_IN_USE 0x000A
+#define L2CAP_CR_LE_UNACCEPT_PARAMS 0x000B
+#define L2CAP_CR_LE_INVALID_PARAMS 0x000C

/* connect/create channel status */
#define L2CAP_CS_NO_INFO 0x0000
@@ -493,6 +493,8 @@ struct l2cap_ecred_reconf_req {
#define L2CAP_RECONF_SUCCESS 0x0000
#define L2CAP_RECONF_INVALID_MTU 0x0001
#define L2CAP_RECONF_INVALID_MPS 0x0002
+#define L2CAP_RECONF_INVALID_CID 0x0003
+#define L2CAP_RECONF_INVALID_PARAMS 0x0004

struct l2cap_ecred_reconf_rsp {
__le16 result;
diff --git a/include/net/ioam6.h b/include/net/ioam6.h
index 2cbbee6e806aa..a75912fe247e6 100644
--- a/include/net/ioam6.h
+++ b/include/net/ioam6.h
@@ -60,6 +60,8 @@ void ioam6_fill_trace_data(struct sk_buff *skb,
struct ioam6_trace_hdr *trace,
bool is_input);

+u8 ioam6_trace_compute_nodelen(u32 trace_type);
+
int ioam6_init(void);
void ioam6_exit(void);

diff --git a/include/net/ipv6.h b/include/net/ipv6.h
index 6d52b5584d2fb..e843eb7efbccd 100644
--- a/include/net/ipv6.h
+++ b/include/net/ipv6.h
@@ -1005,11 +1005,11 @@ static inline int ip6_default_np_autolabel(struct net *net)
#if IS_ENABLED(CONFIG_IPV6)
static inline int ip6_multipath_hash_policy(const struct net *net)
{
- return net->ipv6.sysctl.multipath_hash_policy;
+ return READ_ONCE(net->ipv6.sysctl.multipath_hash_policy);
}
static inline u32 ip6_multipath_hash_fields(const struct net *net)
{
- return net->ipv6.sysctl.multipath_hash_fields;
+ return READ_ONCE(net->ipv6.sysctl.multipath_hash_fields);
}
#else
static inline int ip6_multipath_hash_policy(const struct net *net)
@@ -1275,12 +1275,15 @@ int ipv6_sock_mc_drop(struct sock *sk, int ifindex,

static inline int ip6_sock_set_v6only(struct sock *sk)
{
- if (inet_sk(sk)->inet_num)
- return -EINVAL;
+ int ret = 0;
+
lock_sock(sk);
- sk->sk_ipv6only = true;
+ if (inet_sk(sk)->inet_num)
+ ret = -EINVAL;
+ else
+ sk->sk_ipv6only = true;
release_sock(sk);
- return 0;
+ return ret;
}

static inline void ip6_sock_set_recverr(struct sock *sk)
diff --git a/include/net/netfilter/nf_conntrack_count.h b/include/net/netfilter/nf_conntrack_count.h
index 52a06de41aa0f..cf0166520cf33 100644
--- a/include/net/netfilter/nf_conntrack_count.h
+++ b/include/net/netfilter/nf_conntrack_count.h
@@ -13,6 +13,7 @@ struct nf_conncount_list {
u32 last_gc; /* jiffies at most recent gc */
struct list_head head; /* connections with the same filtering key */
unsigned int count; /* length of list */
+ unsigned int last_gc_count; /* length of list at most recent gc */
};

struct nf_conncount_data *nf_conncount_init(struct net *net, unsigned int keylen);
diff --git a/include/net/netfilter/nf_queue.h b/include/net/netfilter/nf_queue.h
index 4aeffddb75861..45eb26b2e95b3 100644
--- a/include/net/netfilter/nf_queue.h
+++ b/include/net/netfilter/nf_queue.h
@@ -6,11 +6,13 @@
#include <linux/ipv6.h>
#include <linux/jhash.h>
#include <linux/netfilter.h>
+#include <linux/rhashtable-types.h>
#include <linux/skbuff.h>

/* Each queued (to userspace) skbuff has one of these. */
struct nf_queue_entry {
struct list_head list;
+ struct rhash_head hash_node;
struct sk_buff *skb;
unsigned int id;
unsigned int hook_index; /* index in hook_entries->hook[] */
@@ -19,7 +21,9 @@ struct nf_queue_entry {
struct net_device *physout;
#endif
struct nf_hook_state state;
+ bool nf_ct_is_unconfirmed;
u16 size; /* sizeof(entry) + saved route keys */
+ u16 queue_num;

/* extra space to store route keys */
};
diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
index 276f622f35168..a15e9366175ec 100644
--- a/include/net/netns/ipv4.h
+++ b/include/net/netns/ipv4.h
@@ -80,6 +80,12 @@ struct netns_ipv4 {
int sysctl_tcp_rmem[3];
__cacheline_group_end(netns_ipv4_read_rx);

+ /* ICMP rate limiter hot cache line. */
+ __cacheline_group_begin_aligned(icmp);
+ atomic_t icmp_global_credit;
+ u32 icmp_global_stamp;
+ __cacheline_group_end_aligned(icmp);
+
struct inet_timewait_death_row tcp_death_row;
struct udp_table *udp_table;

@@ -124,8 +130,7 @@ struct netns_ipv4 {
int sysctl_icmp_ratemask;
int sysctl_icmp_msgs_per_sec;
int sysctl_icmp_msgs_burst;
- atomic_t icmp_global_credit;
- u32 icmp_global_stamp;
+
u32 ip_rt_min_pmtu;
int ip_rt_mtu_expires;
int ip_rt_min_advmss;
diff --git a/include/rdma/rw.h b/include/rdma/rw.h
index d606cac482338..9a8f4b76ce588 100644
--- a/include/rdma/rw.h
+++ b/include/rdma/rw.h
@@ -66,6 +66,8 @@ int rdma_rw_ctx_post(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u32 port_num,

unsigned int rdma_rw_mr_factor(struct ib_device *device, u32 port_num,
unsigned int maxpages);
+unsigned int rdma_rw_max_send_wr(struct ib_device *dev, u32 port_num,
+ unsigned int max_rdma_ctxs, u32 create_flags);
void rdma_rw_init_qp(struct ib_device *dev, struct ib_qp_init_attr *attr);
int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr);
void rdma_rw_cleanup_mrs(struct ib_qp *qp);
diff --git a/include/uapi/linux/hyperv.h b/include/uapi/linux/hyperv.h
index aaa502a7bff46..1749b35ab2c21 100644
--- a/include/uapi/linux/hyperv.h
+++ b/include/uapi/linux/hyperv.h
@@ -362,7 +362,7 @@ struct hv_kvp_exchg_msg_value {
__u8 value[HV_KVP_EXCHANGE_MAX_VALUE_SIZE];
__u32 value_u32;
__u64 value_u64;
- };
+ } __attribute__((packed));
} __attribute__((packed));

struct hv_kvp_msg_enumerate {
diff --git a/include/uapi/linux/netfilter_bridge.h b/include/uapi/linux/netfilter_bridge.h
index 1610fdbab98df..ad520d3e9df8f 100644
--- a/include/uapi/linux/netfilter_bridge.h
+++ b/include/uapi/linux/netfilter_bridge.h
@@ -5,6 +5,10 @@
/* bridge-specific defines for netfilter.
*/

+#ifndef __KERNEL__
+#include <netinet/if_ether.h> /* for __UAPI_DEF_ETHHDR if defined */
+#endif
+
#include <linux/in.h>
#include <linux/netfilter.h>
#include <linux/if_ether.h>
diff --git a/include/uapi/linux/nfs.h b/include/uapi/linux/nfs.h
index 71c7196d32817..e629c49535345 100644
--- a/include/uapi/linux/nfs.h
+++ b/include/uapi/linux/nfs.h
@@ -55,7 +55,7 @@
NFSERR_NODEV = 19, /* v2 v3 v4 */
NFSERR_NOTDIR = 20, /* v2 v3 v4 */
NFSERR_ISDIR = 21, /* v2 v3 v4 */
- NFSERR_INVAL = 22, /* v2 v3 v4 */
+ NFSERR_INVAL = 22, /* v3 v4 */
NFSERR_FBIG = 27, /* v2 v3 v4 */
NFSERR_NOSPC = 28, /* v2 v3 v4 */
NFSERR_ROFS = 30, /* v2 v3 v4 */
diff --git a/include/uapi/linux/vbox_vmmdev_types.h b/include/uapi/linux/vbox_vmmdev_types.h
index 6073858d52a2e..11f3627c3729b 100644
--- a/include/uapi/linux/vbox_vmmdev_types.h
+++ b/include/uapi/linux/vbox_vmmdev_types.h
@@ -236,7 +236,7 @@ struct vmmdev_hgcm_function_parameter32 {
/** Relative to the request header. */
__u32 offset;
} page_list;
- } u;
+ } __packed u;
} __packed;
VMMDEV_ASSERT_SIZE(vmmdev_hgcm_function_parameter32, 4 + 8);

@@ -251,7 +251,7 @@ struct vmmdev_hgcm_function_parameter64 {
union {
__u64 phys_addr;
__u64 linear_addr;
- } u;
+ } __packed u;
} __packed pointer;
struct {
/** Size of the buffer described by the page list. */
diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
index bdc5564b16fba..e68ce42eaff4d 100644
--- a/include/ufs/ufshcd.h
+++ b/include/ufs/ufshcd.h
@@ -1356,17 +1356,13 @@ static inline void *ufshcd_get_variant(struct ufs_hba *hba)
return hba->priv;
}

-#ifdef CONFIG_PM
extern int ufshcd_runtime_suspend(struct device *dev);
extern int ufshcd_runtime_resume(struct device *dev);
-#endif
-#ifdef CONFIG_PM_SLEEP
extern int ufshcd_system_suspend(struct device *dev);
extern int ufshcd_system_resume(struct device *dev);
extern int ufshcd_system_freeze(struct device *dev);
extern int ufshcd_system_thaw(struct device *dev);
extern int ufshcd_system_restore(struct device *dev);
-#endif

extern int ufshcd_dme_configure_adapt(struct ufs_hba *hba,
int agreed_gear,
diff --git a/include/xen/xen.h b/include/xen/xen.h
index a1e5b3f18d69f..86fe96fe51834 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -62,11 +62,13 @@ extern u64 xen_saved_max_mem_size;
#endif

#ifdef CONFIG_XEN_UNPOPULATED_ALLOC
+extern unsigned long xen_unpopulated_pages;
int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
#include <linux/ioport.h>
int arch_xen_unpopulated_init(struct resource **res);
#else
+#define xen_unpopulated_pages 0UL
#include <xen/balloon.h>
static inline int xen_alloc_unpopulated_pages(unsigned int nr_pages,
struct page **pages)
diff --git a/io_uring/cancel.h b/io_uring/cancel.h
index b33995e00ba90..da13a1d820622 100644
--- a/io_uring/cancel.h
+++ b/io_uring/cancel.h
@@ -6,10 +6,8 @@

struct io_cancel_data {
struct io_ring_ctx *ctx;
- union {
- u64 data;
- struct file *file;
- };
+ u64 data;
+ struct file *file;
u8 opcode;
u32 flags;
int seq;
diff --git a/io_uring/filetable.c b/io_uring/filetable.c
index 6183d61c7222d..72423e27e5e1d 100644
--- a/io_uring/filetable.c
+++ b/io_uring/filetable.c
@@ -22,6 +22,10 @@ static int io_file_bitmap_get(struct io_ring_ctx *ctx)
if (!table->bitmap)
return -ENFILE;

+ if (table->alloc_hint < ctx->file_alloc_start ||
+ table->alloc_hint >= ctx->file_alloc_end)
+ table->alloc_hint = ctx->file_alloc_start;
+
do {
ret = find_next_zero_bit(table->bitmap, nr, table->alloc_hint);
if (ret != nr)
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 99b0b1ba0fe22..5c60442e67028 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3312,7 +3312,11 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit,

ctx = file->private_data;
ret = -EBADFD;
- if (unlikely(ctx->flags & IORING_SETUP_R_DISABLED))
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_uring_add_tctx_node() -> __io_uring_add_tctx_node_from_submit()
+ */
+ if (unlikely(smp_load_acquire(&ctx->flags) & IORING_SETUP_R_DISABLED))
goto out;

/*
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 97708e5132bc4..3cf59d8c58073 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -126,7 +126,11 @@ static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags)
return -EINVAL;
if (!(msg->flags & IORING_MSG_RING_FLAGS_PASS) && msg->dst_fd)
return -EINVAL;
- if (target_ctx->flags & IORING_SETUP_R_DISABLED)
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_msg_data_remote() -> io_msg_remote_post()
+ */
+ if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
return -EBADFD;

if (io_msg_need_remote(target_ctx))
@@ -237,7 +241,11 @@ static int io_msg_send_fd(struct io_kiocb *req, unsigned int issue_flags)
return -EINVAL;
if (target_ctx == ctx)
return -EINVAL;
- if (target_ctx->flags & IORING_SETUP_R_DISABLED)
+ /*
+ * Keep IORING_SETUP_R_DISABLED check before submitter_task load
+ * in io_msg_fd_remote()
+ */
+ if (smp_load_acquire(&target_ctx->flags) & IORING_SETUP_R_DISABLED)
return -EBADFD;
if (!src_file) {
src_file = io_msg_grab_file(req, issue_flags);
diff --git a/io_uring/net.c b/io_uring/net.c
index b7c93765fcff8..56a0acc0894c8 100644
--- a/io_uring/net.c
+++ b/io_uring/net.c
@@ -522,7 +522,11 @@ static inline bool io_send_finish(struct io_kiocb *req, int *ret,

cflags = io_put_kbufs(req, *ret, io_bundle_nbufs(kmsg, *ret), issue_flags);

- if (bundle_finished || req->flags & REQ_F_BL_EMPTY)
+ /*
+ * Don't start new bundles if the buffer list is empty, or if the
+ * current operation needed to go through polling to complete.
+ */
+ if (bundle_finished || req->flags & (REQ_F_BL_EMPTY | REQ_F_POLLED))
goto finish;

/*
diff --git a/io_uring/register.c b/io_uring/register.c
index a325b493ae121..f700ddf1f1d1f 100644
--- a/io_uring/register.c
+++ b/io_uring/register.c
@@ -190,7 +190,8 @@ static int io_register_enable_rings(struct io_ring_ctx *ctx)
if (ctx->restrictions.registered)
ctx->restricted = 1;

- ctx->flags &= ~IORING_SETUP_R_DISABLED;
+ /* Keep submitter_task store before clearing IORING_SETUP_R_DISABLED */
+ smp_store_release(&ctx->flags, ctx->flags & ~IORING_SETUP_R_DISABLED);
if (ctx->sq_data && wq_has_sleeper(&ctx->sq_data->wait))
wake_up(&ctx->sq_data->wait);
return 0;
diff --git a/io_uring/sync.c b/io_uring/sync.c
index 255f68c37e55c..27bd0a26500bc 100644
--- a/io_uring/sync.c
+++ b/io_uring/sync.c
@@ -62,6 +62,8 @@ int io_fsync_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
return -EINVAL;

sync->off = READ_ONCE(sqe->off);
+ if (sync->off < 0)
+ return -EINVAL;
sync->len = READ_ONCE(sqe->len);
req->flags |= REQ_F_FORCE_ASYNC;
return 0;
diff --git a/ipc/ipc_sysctl.c b/ipc/ipc_sysctl.c
index 54318e0b45578..61ce52ce30530 100644
--- a/ipc/ipc_sysctl.c
+++ b/ipc/ipc_sysctl.c
@@ -214,7 +214,7 @@ static int ipc_permissions(struct ctl_table_header *head, const struct ctl_table
if (((table->data == &ns->ids[IPC_SEM_IDS].next_id) ||
(table->data == &ns->ids[IPC_MSG_IDS].next_id) ||
(table->data == &ns->ids[IPC_SHM_IDS].next_id)) &&
- checkpoint_restore_ns_capable(ns->user_ns))
+ checkpoint_restore_ns_capable_noaudit(ns->user_ns))
mode = 0666;
else
#endif
diff --git a/kernel/bpf/crypto.c b/kernel/bpf/crypto.c
index 83c4d9943084b..1d024fe7248ac 100644
--- a/kernel/bpf/crypto.c
+++ b/kernel/bpf/crypto.c
@@ -261,6 +261,12 @@ __bpf_kfunc void bpf_crypto_ctx_release(struct bpf_crypto_ctx *ctx)
call_rcu(&ctx->rcu, crypto_free_cb);
}

+__bpf_kfunc void bpf_crypto_ctx_release_dtor(void *ctx)
+{
+ bpf_crypto_ctx_release(ctx);
+}
+CFI_NOSEAL(bpf_crypto_ctx_release_dtor);
+
static int bpf_crypto_crypt(const struct bpf_crypto_ctx *ctx,
const struct bpf_dynptr_kern *src,
const struct bpf_dynptr_kern *dst,
@@ -368,7 +374,7 @@ static const struct btf_kfunc_id_set crypt_kfunc_set = {

BTF_ID_LIST(bpf_crypto_dtor_ids)
BTF_ID(struct, bpf_crypto_ctx)
-BTF_ID(func, bpf_crypto_ctx_release)
+BTF_ID(func, bpf_crypto_ctx_release_dtor)

static int __init crypto_kfunc_init(void)
{
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 7b75a2dd8cb8f..a9e2380327b45 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -13999,21 +13999,17 @@ static void __scalar64_min_max_lsh(struct bpf_reg_state *dst_reg,
u64 umin_val, u64 umax_val)
{
/* Special case <<32 because it is a common compiler pattern to sign
- * extend subreg by doing <<32 s>>32. In this case if 32bit bounds are
- * positive we know this shift will also be positive so we can track
- * bounds correctly. Otherwise we lose all sign bit information except
- * what we can pick up from var_off. Perhaps we can generalize this
- * later to shifts of any length.
+ * extend subreg by doing <<32 s>>32. smin/smax assignments are correct
+ * because s32 bounds don't flip sign when shifting to the left by
+ * 32bits.
*/
- if (umin_val == 32 && umax_val == 32 && dst_reg->s32_max_value >= 0)
+ if (umin_val == 32 && umax_val == 32) {
dst_reg->smax_value = (s64)dst_reg->s32_max_value << 32;
- else
- dst_reg->smax_value = S64_MAX;
-
- if (umin_val == 32 && umax_val == 32 && dst_reg->s32_min_value >= 0)
dst_reg->smin_value = (s64)dst_reg->s32_min_value << 32;
- else
+ } else {
+ dst_reg->smax_value = S64_MAX;
dst_reg->smin_value = S64_MIN;
+ }

/* If we might shift our top bit out, then we know nothing */
if (dst_reg->umax_value > 1ULL << (63 - umax_val)) {
@@ -14196,6 +14192,35 @@ static bool is_safe_to_compute_dst_reg_range(struct bpf_insn *insn,
}
}

+static int maybe_fork_scalars(struct bpf_verifier_env *env, struct bpf_insn *insn,
+ struct bpf_reg_state *dst_reg)
+{
+ struct bpf_verifier_state *branch;
+ struct bpf_reg_state *regs;
+ bool alu32;
+
+ if (dst_reg->smin_value == -1 && dst_reg->smax_value == 0)
+ alu32 = false;
+ else if (dst_reg->s32_min_value == -1 && dst_reg->s32_max_value == 0)
+ alu32 = true;
+ else
+ return 0;
+
+ branch = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
+ if (IS_ERR(branch))
+ return PTR_ERR(branch);
+
+ regs = branch->frame[branch->curframe]->regs;
+ if (alu32) {
+ __mark_reg32_known(&regs[insn->dst_reg], 0);
+ __mark_reg32_known(dst_reg, -1ull);
+ } else {
+ __mark_reg_known(&regs[insn->dst_reg], 0);
+ __mark_reg_known(dst_reg, -1ull);
+ }
+ return 0;
+}
+
/* WARNING: This function does calculations on 64-bit values, but the actual
* execution may occur on 32-bit values. Therefore, things like bitshifts
* need extra checks in the 32-bit case.
@@ -14251,11 +14276,21 @@ static int adjust_scalar_min_max_vals(struct bpf_verifier_env *env,
scalar_min_max_mul(dst_reg, &src_reg);
break;
case BPF_AND:
+ if (tnum_is_const(src_reg.var_off)) {
+ ret = maybe_fork_scalars(env, insn, dst_reg);
+ if (ret)
+ return ret;
+ }
dst_reg->var_off = tnum_and(dst_reg->var_off, src_reg.var_off);
scalar32_min_max_and(dst_reg, &src_reg);
scalar_min_max_and(dst_reg, &src_reg);
break;
case BPF_OR:
+ if (tnum_is_const(src_reg.var_off)) {
+ ret = maybe_fork_scalars(env, insn, dst_reg);
+ if (ret)
+ return ret;
+ }
dst_reg->var_off = tnum_or(dst_reg->var_off, src_reg.var_off);
scalar32_min_max_or(dst_reg, &src_reg);
scalar_min_max_or(dst_reg, &src_reg);
@@ -15478,6 +15513,7 @@ static void sync_linked_regs(struct bpf_verifier_state *vstate, struct bpf_reg_s
} else {
s32 saved_subreg_def = reg->subreg_def;
s32 saved_off = reg->off;
+ u32 saved_id = reg->id;

fake_reg.type = SCALAR_VALUE;
__mark_reg_known(&fake_reg, (s32)reg->off - (s32)known_reg->off);
@@ -15485,10 +15521,11 @@ static void sync_linked_regs(struct bpf_verifier_state *vstate, struct bpf_reg_s
/* reg = known_reg; reg += delta */
copy_register_state(reg, known_reg);
/*
- * Must preserve off, id and add_const flag,
+ * Must preserve off, id and subreg_def flag,
* otherwise another sync_linked_regs() will be incorrect.
*/
reg->off = saved_off;
+ reg->id = saved_id;
reg->subreg_def = saved_subreg_def;

scalar32_min_max_add(reg, &fake_reg);
diff --git a/kernel/configs/debug.config b/kernel/configs/debug.config
index 509ee703de152..c1b15426fef45 100644
--- a/kernel/configs/debug.config
+++ b/kernel/configs/debug.config
@@ -29,7 +29,6 @@ CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_UBSAN_ALIGNMENT is not set
# CONFIG_UBSAN_DIV_ZERO is not set
# CONFIG_UBSAN_TRAP is not set
-# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
CONFIG_DEBUG_IRQFLAGS=y
diff --git a/kernel/kallsyms.c b/kernel/kallsyms.c
index 9e4bf061bb834..9d3f0c1cca8e2 100644
--- a/kernel/kallsyms.c
+++ b/kernel/kallsyms.c
@@ -389,8 +389,8 @@ static int kallsyms_lookup_buildid(unsigned long addr,
offset, modname, namebuf);

if (!ret)
- ret = ftrace_mod_address_lookup(addr, symbolsize,
- offset, modname, namebuf);
+ ret = ftrace_mod_address_lookup(addr, symbolsize, offset,
+ modname, modbuildid, namebuf);

return ret;
}
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 3eedb8c226ad8..f852528bdc246 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -821,6 +821,60 @@ static int kexec_calculate_store_digests(struct kimage *image)
}

#ifdef CONFIG_ARCH_SUPPORTS_KEXEC_PURGATORY
+/*
+ * kexec_purgatory_find_symbol - find a symbol in the purgatory
+ * @pi: Purgatory to search in.
+ * @name: Name of the symbol.
+ *
+ * Return: pointer to symbol in read-only symtab on success, NULL on error.
+ */
+static const Elf_Sym *kexec_purgatory_find_symbol(struct purgatory_info *pi,
+ const char *name)
+{
+ const Elf_Shdr *sechdrs;
+ const Elf_Ehdr *ehdr;
+ const Elf_Sym *syms;
+ const char *strtab;
+ int i, k;
+
+ if (!pi->ehdr)
+ return NULL;
+
+ ehdr = pi->ehdr;
+ sechdrs = (void *)ehdr + ehdr->e_shoff;
+
+ for (i = 0; i < ehdr->e_shnum; i++) {
+ if (sechdrs[i].sh_type != SHT_SYMTAB)
+ continue;
+
+ if (sechdrs[i].sh_link >= ehdr->e_shnum)
+ /* Invalid strtab section number */
+ continue;
+ strtab = (void *)ehdr + sechdrs[sechdrs[i].sh_link].sh_offset;
+ syms = (void *)ehdr + sechdrs[i].sh_offset;
+
+ /* Go through symbols for a match */
+ for (k = 0; k < sechdrs[i].sh_size/sizeof(Elf_Sym); k++) {
+ if (ELF_ST_BIND(syms[k].st_info) != STB_GLOBAL)
+ continue;
+
+ if (strcmp(strtab + syms[k].st_name, name) != 0)
+ continue;
+
+ if (syms[k].st_shndx == SHN_UNDEF ||
+ syms[k].st_shndx >= ehdr->e_shnum) {
+ pr_debug("Symbol: %s has bad section index %d.\n",
+ name, syms[k].st_shndx);
+ return NULL;
+ }
+
+ /* Found the symbol we are looking for */
+ return &syms[k];
+ }
+ }
+
+ return NULL;
+}
/*
* kexec_purgatory_setup_kbuf - prepare buffer to load purgatory.
* @pi: Purgatory to be loaded.
@@ -899,6 +953,10 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,
unsigned long offset;
size_t sechdrs_size;
Elf_Shdr *sechdrs;
+ const Elf_Sym *entry_sym;
+ u16 entry_shndx = 0;
+ unsigned long entry_off = 0;
+ bool start_fixed = false;
int i;

/*
@@ -916,6 +974,12 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,
bss_addr = kbuf->mem + kbuf->bufsz;
kbuf->image->start = pi->ehdr->e_entry;

+ entry_sym = kexec_purgatory_find_symbol(pi, "purgatory_start");
+ if (entry_sym) {
+ entry_shndx = entry_sym->st_shndx;
+ entry_off = entry_sym->st_value;
+ }
+
for (i = 0; i < pi->ehdr->e_shnum; i++) {
unsigned long align;
void *src, *dst;
@@ -933,6 +997,13 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,

offset = ALIGN(offset, align);

+ if (!start_fixed && entry_sym && i == entry_shndx &&
+ (sechdrs[i].sh_flags & SHF_EXECINSTR) &&
+ entry_off < sechdrs[i].sh_size) {
+ kbuf->image->start = kbuf->mem + offset + entry_off;
+ start_fixed = true;
+ }
+
/*
* Check if the segment contains the entry point, if so,
* calculate the value of image->start based on it.
@@ -943,13 +1014,14 @@ static int kexec_purgatory_setup_sechdrs(struct purgatory_info *pi,
* is not set to the initial value, and warn the user so they
* have a chance to fix their purgatory's linker script.
*/
- if (sechdrs[i].sh_flags & SHF_EXECINSTR &&
+ if (!start_fixed && sechdrs[i].sh_flags & SHF_EXECINSTR &&
pi->ehdr->e_entry >= sechdrs[i].sh_addr &&
pi->ehdr->e_entry < (sechdrs[i].sh_addr
+ sechdrs[i].sh_size) &&
- !WARN_ON(kbuf->image->start != pi->ehdr->e_entry)) {
+ kbuf->image->start == pi->ehdr->e_entry) {
kbuf->image->start -= sechdrs[i].sh_addr;
kbuf->image->start += kbuf->mem + offset;
+ start_fixed = true;
}

src = (void *)pi->ehdr + sechdrs[i].sh_offset;
@@ -1067,61 +1139,6 @@ int kexec_load_purgatory(struct kimage *image, struct kexec_buf *kbuf)
return ret;
}

-/*
- * kexec_purgatory_find_symbol - find a symbol in the purgatory
- * @pi: Purgatory to search in.
- * @name: Name of the symbol.
- *
- * Return: pointer to symbol in read-only symtab on success, NULL on error.
- */
-static const Elf_Sym *kexec_purgatory_find_symbol(struct purgatory_info *pi,
- const char *name)
-{
- const Elf_Shdr *sechdrs;
- const Elf_Ehdr *ehdr;
- const Elf_Sym *syms;
- const char *strtab;
- int i, k;
-
- if (!pi->ehdr)
- return NULL;
-
- ehdr = pi->ehdr;
- sechdrs = (void *)ehdr + ehdr->e_shoff;
-
- for (i = 0; i < ehdr->e_shnum; i++) {
- if (sechdrs[i].sh_type != SHT_SYMTAB)
- continue;
-
- if (sechdrs[i].sh_link >= ehdr->e_shnum)
- /* Invalid strtab section number */
- continue;
- strtab = (void *)ehdr + sechdrs[sechdrs[i].sh_link].sh_offset;
- syms = (void *)ehdr + sechdrs[i].sh_offset;
-
- /* Go through symbols for a match */
- for (k = 0; k < sechdrs[i].sh_size/sizeof(Elf_Sym); k++) {
- if (ELF_ST_BIND(syms[k].st_info) != STB_GLOBAL)
- continue;
-
- if (strcmp(strtab + syms[k].st_name, name) != 0)
- continue;
-
- if (syms[k].st_shndx == SHN_UNDEF ||
- syms[k].st_shndx >= ehdr->e_shnum) {
- pr_debug("Symbol: %s has bad section index %d.\n",
- name, syms[k].st_shndx);
- return NULL;
- }
-
- /* Found the symbol we are looking for */
- return &syms[k];
- }
- }
-
- return NULL;
-}
-
void *kexec_purgatory_get_symbol_addr(struct kimage *image, const char *name)
{
struct purgatory_info *pi = &image->purgatory_info;
diff --git a/kernel/module/kallsyms.c b/kernel/module/kallsyms.c
index bf65e0c3c86fc..30d0798f114f1 100644
--- a/kernel/module/kallsyms.c
+++ b/kernel/module/kallsyms.c
@@ -337,13 +337,8 @@ int module_address_lookup(unsigned long addr,
if (mod) {
if (modname)
*modname = mod->name;
- if (modbuildid) {
-#if IS_ENABLED(CONFIG_STACKTRACE_BUILD_ID)
- *modbuildid = mod->build_id;
-#else
- *modbuildid = NULL;
-#endif
- }
+ if (modbuildid)
+ *modbuildid = module_buildid(mod);

sym = find_kallsyms_symbol(mod, addr, size, offset);

diff --git a/kernel/rcu/tree.h b/kernel/rcu/tree.h
index 8ba04b179416a..08c020e01425d 100644
--- a/kernel/rcu/tree.h
+++ b/kernel/rcu/tree.h
@@ -202,7 +202,7 @@ struct rcu_data {
/* during and after the last grace */
/* period it is aware of. */
struct irq_work defer_qs_iw; /* Obtain later scheduler attention. */
- int defer_qs_iw_pending; /* Scheduler attention pending? */
+ int defer_qs_pending; /* irq_work or softirq pending? */
struct work_struct strict_work; /* Schedule readers for strict GPs. */

/* 2) batch handling */
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 2d865b2096beb..47a44f6dede0c 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -486,8 +486,8 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
union rcu_special special;

rdp = this_cpu_ptr(&rcu_data);
- if (rdp->defer_qs_iw_pending == DEFER_QS_PENDING)
- rdp->defer_qs_iw_pending = DEFER_QS_IDLE;
+ if (rdp->defer_qs_pending == DEFER_QS_PENDING)
+ rdp->defer_qs_pending = DEFER_QS_IDLE;

/*
* If RCU core is waiting for this CPU to exit its critical section,
@@ -626,11 +626,10 @@ notrace void rcu_preempt_deferred_qs(struct task_struct *t)
*/
static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
{
- unsigned long flags;
struct rcu_data *rdp;

+ lockdep_assert_irqs_disabled();
rdp = container_of(iwp, struct rcu_data, defer_qs_iw);
- local_irq_save(flags);

/*
* If the IRQ work handler happens to run in the middle of RCU read-side
@@ -646,9 +645,76 @@ static void rcu_preempt_deferred_qs_handler(struct irq_work *iwp)
* 5. Deferred QS reporting does not happen.
*/
if (rcu_preempt_depth() > 0)
- WRITE_ONCE(rdp->defer_qs_iw_pending, DEFER_QS_IDLE);
+ WRITE_ONCE(rdp->defer_qs_pending, DEFER_QS_IDLE);
+}

- local_irq_restore(flags);
+/*
+ * Check if expedited grace period processing during unlock is needed.
+ *
+ * This function determines whether expedited handling is required based on:
+ * 1. Task blocking an expedited grace period (based on a heuristic that
+ * can yield false positives; see below)
+ * 2. CPU participating in an expedited grace period
+ * 3. Strict grace period mode requiring expedited handling
+ * 4. RCU priority deboosting needs when interrupts were disabled
+ *
+ * @t: The task being checked
+ * @rdp: The per-CPU RCU data
+ * @rnp: The RCU node for this CPU
+ * @irqs_were_disabled: Whether interrupts were disabled before rcu_read_unlock()
+ *
+ * Returns true if expedited processing of the rcu_read_unlock() is needed.
+ */
+static bool rcu_unlock_needs_exp_handling(struct task_struct *t,
+ struct rcu_data *rdp,
+ struct rcu_node *rnp,
+ bool irqs_were_disabled)
+{
+ /*
+ * Check if this task is blocking an expedited grace period. If the
+ * task was preempted within an RCU read-side critical section and is
+ * on the expedited grace period blockers list (exp_tasks), we need
+ * expedited handling to unblock the expedited GP. This is not an exact
+ * check because 't' might not be on the exp_tasks list at all - its
+ * just a fast heuristic that can be false-positive sometimes.
+ */
+ if (t->rcu_blocked_node && READ_ONCE(t->rcu_blocked_node->exp_tasks))
+ return true;
+
+ /*
+ * Check if this CPU is participating in an expedited grace period.
+ * The expmask bitmap tracks which CPUs need to check in for the
+ * current expedited GP. If our CPU's bit is set, we need expedited
+ * handling to help complete the expedited GP.
+ */
+ if (rdp->grpmask & READ_ONCE(rnp->expmask))
+ return true;
+
+ /*
+ * In CONFIG_RCU_STRICT_GRACE_PERIOD=y kernels, all grace periods
+ * are treated as short for testing purposes even if that means
+ * disturbing the system more. Check if either:
+ * - This CPU has not yet reported a quiescent state, or
+ * - This task was preempted within an RCU critical section
+ * In either case, require expedited handling for strict GP mode.
+ */
+ if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
+ ((rdp->grpmask & READ_ONCE(rnp->qsmask)) || t->rcu_blocked_node))
+ return true;
+
+ /*
+ * RCU priority boosting case: If a task is subject to RCU priority
+ * boosting and exits an RCU read-side critical section with interrupts
+ * disabled, we need expedited handling to ensure timely deboosting.
+ * Without this, a low-priority task could incorrectly run at high
+ * real-time priority for an extended period, degrading real-time
+ * responsiveness. This applies to all CONFIG_RCU_BOOST=y kernels,
+ * not just to PREEMPT_RT.
+ */
+ if (IS_ENABLED(CONFIG_RCU_BOOST) && irqs_were_disabled && t->rcu_blocked_node)
+ return true;
+
+ return false;
}

/*
@@ -670,22 +736,21 @@ static void rcu_read_unlock_special(struct task_struct *t)
local_irq_save(flags);
irqs_were_disabled = irqs_disabled_flags(flags);
if (preempt_bh_were_disabled || irqs_were_disabled) {
- bool expboost; // Expedited GP in flight or possible boosting.
+ bool needs_exp; // Expedited handling needed.
struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
struct rcu_node *rnp = rdp->mynode;

- expboost = (t->rcu_blocked_node && READ_ONCE(t->rcu_blocked_node->exp_tasks)) ||
- (rdp->grpmask & READ_ONCE(rnp->expmask)) ||
- (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) &&
- ((rdp->grpmask & READ_ONCE(rnp->qsmask)) || t->rcu_blocked_node)) ||
- (IS_ENABLED(CONFIG_RCU_BOOST) && irqs_were_disabled &&
- t->rcu_blocked_node);
+ needs_exp = rcu_unlock_needs_exp_handling(t, rdp, rnp, irqs_were_disabled);
+
// Need to defer quiescent state until everything is enabled.
- if (use_softirq && (in_hardirq() || (expboost && !irqs_were_disabled))) {
+ if (use_softirq && (in_hardirq() || (needs_exp && !irqs_were_disabled))) {
// Using softirq, safe to awaken, and either the
// wakeup is free or there is either an expedited
// GP in flight or a potential need to deboost.
- raise_softirq_irqoff(RCU_SOFTIRQ);
+ if (rdp->defer_qs_pending != DEFER_QS_PENDING) {
+ rdp->defer_qs_pending = DEFER_QS_PENDING;
+ raise_softirq_irqoff(RCU_SOFTIRQ);
+ }
} else {
// Enabling BH or preempt does reschedule, so...
// Also if no expediting and no possible deboosting,
@@ -694,11 +759,11 @@ static void rcu_read_unlock_special(struct task_struct *t)
set_tsk_need_resched(current);
set_preempt_need_resched();
if (IS_ENABLED(CONFIG_IRQ_WORK) && irqs_were_disabled &&
- expboost && rdp->defer_qs_iw_pending != DEFER_QS_PENDING &&
+ needs_exp && rdp->defer_qs_pending != DEFER_QS_PENDING &&
cpu_online(rdp->cpu)) {
// Get scheduler to re-evaluate and call hooks.
// If !IRQ_WORK, FQS scan will eventually IPI.
- rdp->defer_qs_iw_pending = DEFER_QS_PENDING;
+ rdp->defer_qs_pending = DEFER_QS_PENDING;
irq_work_queue_on(&rdp->defer_qs_iw, rdp->cpu);
}
}
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1689d190dea8f..8acdd97538546 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -3649,6 +3649,9 @@ static void __dl_clear_params(struct sched_dl_entity *dl_se)
dl_se->dl_non_contending = 0;
dl_se->dl_overrun = 0;
dl_se->dl_server = 0;
+ dl_se->dl_defer = 0;
+ dl_se->dl_defer_running = 0;
+ dl_se->dl_defer_armed = 0;

#ifdef CONFIG_RT_MUTEXES
dl_se->pi_se = dl_se;
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 72958b31549fb..7d14e9fa53ac3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -347,8 +347,8 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
long cpu = (long) ((struct seq_file *) filp->private_data)->private;
struct rq *rq = cpu_rq(cpu);
u64 runtime, period;
+ int retval = 0;
size_t err;
- int retval;
u64 value;

err = kstrtoull_from_user(ubuf, cnt, 10, &value);
@@ -384,8 +384,6 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu
}

retval = dl_server_apply_params(&rq->fair_server, runtime, period, 0);
- if (retval)
- cnt = retval;

if (!runtime)
printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n",
@@ -393,6 +391,9 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu

if (rq->cfs.h_nr_queued)
dl_server_start(&rq->fair_server);
+
+ if (retval < 0)
+ return retval;
}

*ppos += cnt;
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index c437a15026238..ffcce501ed40c 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2151,6 +2151,7 @@ static void push_rt_tasks(struct rq *rq)
*/
static int rto_next_cpu(struct root_domain *rd)
{
+ int this_cpu = smp_processor_id();
int next;
int cpu;

@@ -2174,6 +2175,10 @@ static int rto_next_cpu(struct root_domain *rd)

rd->rto_cpu = cpu;

+ /* Do not send IPI to self */
+ if (cpu == this_cpu)
+ continue;
+
if (cpu < nr_cpu_ids)
return cpu;

diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 184d5c3d89bac..640d2ea4bd1fa 100644
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1714,7 +1714,7 @@ static void __run_hrtimer(struct hrtimer_cpu_base *cpu_base,

lockdep_assert_held(&cpu_base->lock);

- debug_deactivate(timer);
+ debug_hrtimer_deactivate(timer);
base->running = timer;

/*
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 721c3b221048a..ab277eff80dc2 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -31,7 +31,7 @@ config HAVE_FUNCTION_GRAPH_TRACER
help
See Documentation/trace/ftrace-design.rst

-config HAVE_FUNCTION_GRAPH_RETVAL
+config HAVE_FUNCTION_GRAPH_FREGS
bool

config HAVE_DYNAMIC_FTRACE
@@ -232,7 +232,7 @@ config FUNCTION_GRAPH_TRACER

config FUNCTION_GRAPH_RETVAL
bool "Kernel Function Graph Return Value"
- depends on HAVE_FUNCTION_GRAPH_RETVAL
+ depends on HAVE_FUNCTION_GRAPH_FREGS
depends on FUNCTION_GRAPH_TRACER
default n
help
diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
index 910da0e4531ae..c2d9937158117 100644
--- a/kernel/trace/fgraph.c
+++ b/kernel/trace/fgraph.c
@@ -502,7 +502,11 @@ static struct fgraph_ops fgraph_stub = {
static struct fgraph_ops *fgraph_direct_gops = &fgraph_stub;
DEFINE_STATIC_CALL(fgraph_func, ftrace_graph_entry_stub);
DEFINE_STATIC_CALL(fgraph_retfunc, ftrace_graph_ret_stub);
+#if FGRAPH_NO_DIRECT
+static DEFINE_STATIC_KEY_FALSE(fgraph_do_direct);
+#else
static DEFINE_STATIC_KEY_TRUE(fgraph_do_direct);
+#endif

/**
* ftrace_graph_stop - set to permanently disable function graph tracing
@@ -761,15 +765,12 @@ static struct notifier_block ftrace_suspend_notifier = {
.notifier_call = ftrace_suspend_notifier_call,
};

-/* fgraph_ret_regs is not defined without CONFIG_FUNCTION_GRAPH_RETVAL */
-struct fgraph_ret_regs;
-
/*
* Send the trace to the ring-buffer.
* @return the original return address.
*/
-static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs,
- unsigned long frame_pointer)
+static inline unsigned long
+__ftrace_return_to_handler(struct ftrace_regs *fregs, unsigned long frame_pointer)
{
struct ftrace_ret_stack *ret_stack;
struct ftrace_graph_ret trace;
@@ -789,13 +790,13 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs

trace.rettime = trace_clock_local();
#ifdef CONFIG_FUNCTION_GRAPH_RETVAL
- trace.retval = fgraph_ret_regs_return_value(ret_regs);
+ trace.retval = ftrace_regs_get_return_value(fregs);
#endif

bitmap = get_bitmap_bits(current, offset);

#ifdef CONFIG_HAVE_STATIC_CALL
- if (static_branch_likely(&fgraph_do_direct)) {
+ if (!FGRAPH_NO_DIRECT && static_branch_likely(&fgraph_do_direct)) {
if (test_bit(fgraph_direct_gops->idx, &bitmap))
static_call(fgraph_retfunc)(&trace, fgraph_direct_gops);
} else
@@ -824,14 +825,14 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
}

/*
- * After all architecures have selected HAVE_FUNCTION_GRAPH_RETVAL, we can
- * leave only ftrace_return_to_handler(ret_regs).
+ * After all architectures have selected HAVE_FUNCTION_GRAPH_FREGS, we can
+ * leave only ftrace_return_to_handler(fregs).
*/
-#ifdef CONFIG_HAVE_FUNCTION_GRAPH_RETVAL
-unsigned long ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs)
+#ifdef CONFIG_HAVE_FUNCTION_GRAPH_FREGS
+unsigned long ftrace_return_to_handler(struct ftrace_regs *fregs)
{
- return __ftrace_return_to_handler(ret_regs,
- fgraph_ret_regs_frame_pointer(ret_regs));
+ return __ftrace_return_to_handler(fregs,
+ ftrace_regs_get_frame_pointer(fregs));
}
#else
unsigned long ftrace_return_to_handler(unsigned long frame_pointer)
@@ -1214,6 +1215,9 @@ static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *go
trace_func_graph_ret_t retfunc = NULL;
int i;

+ if (FGRAPH_NO_DIRECT)
+ return;
+
if (gops) {
func = gops->entryfunc;
retfunc = gops->retfunc;
@@ -1232,11 +1236,14 @@ static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *go
static_call_update(fgraph_func, func);
static_call_update(fgraph_retfunc, retfunc);
if (enable_branch)
- static_branch_disable(&fgraph_do_direct);
+ static_branch_enable(&fgraph_do_direct);
}

static void ftrace_graph_disable_direct(bool disable_branch)
{
+ if (FGRAPH_NO_DIRECT)
+ return;
+
if (disable_branch)
static_branch_disable(&fgraph_do_direct);
static_call_update(fgraph_func, ftrace_graph_entry_stub);
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index b2442aabccfd0..c8a6c0bb907ea 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -7566,7 +7566,8 @@ ftrace_func_address_lookup(struct ftrace_mod_map *mod_map,

int
ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
- unsigned long *off, char **modname, char *sym)
+ unsigned long *off, char **modname,
+ const unsigned char **modbuildid, char *sym)
{
struct ftrace_mod_map *mod_map;
int ret = 0;
@@ -7578,6 +7579,8 @@ ftrace_mod_address_lookup(unsigned long addr, unsigned long *size,
if (ret) {
if (modname)
*modname = mod_map->mod->name;
+ if (modbuildid)
+ *modbuildid = module_buildid(mod_map->mod);
break;
}
}
@@ -7973,7 +7976,7 @@ __ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
void arch_ftrace_ops_list_func(unsigned long ip, unsigned long parent_ip,
struct ftrace_ops *op, struct ftrace_regs *fregs)
{
- kmsan_unpoison_memory(fregs, sizeof(*fregs));
+ kmsan_unpoison_memory(fregs, ftrace_regs_size());
__ftrace_ops_list_func(ip, parent_ip, NULL, fregs);
}
#else
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 2c42e26ced6b6..9e72543e0099b 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1751,6 +1751,7 @@ static int rb_read_data_buffer(struct buffer_data_page *dpage, int tail, int cpu
struct ring_buffer_event *event;
u64 ts, delta;
int events = 0;
+ int len;
int e;

*delta_ptr = 0;
@@ -1758,9 +1759,12 @@ static int rb_read_data_buffer(struct buffer_data_page *dpage, int tail, int cpu

ts = dpage->time_stamp;

- for (e = 0; e < tail; e += rb_event_length(event)) {
+ for (e = 0; e < tail; e += len) {

event = (struct ring_buffer_event *)(dpage->data + e);
+ len = rb_event_length(event);
+ if (len <= 0 || len > tail - e)
+ return -1;

switch (event->type_len) {

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2f68e6341703c..fb76a7262abf8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -8672,7 +8672,7 @@ tracing_init_tracefs_percpu(struct trace_array *tr, long cpu)
trace_create_cpu_file("stats", TRACE_MODE_READ, d_cpu,
tr, cpu, &tracing_stats_fops);

- trace_create_cpu_file("buffer_size_kb", TRACE_MODE_READ, d_cpu,
+ trace_create_cpu_file("buffer_size_kb", TRACE_MODE_WRITE, d_cpu,
tr, cpu, &tracing_entries_fops);

if (tr->range_addr_start)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index a3d7067eae654..38218cfe20e90 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1169,6 +1169,9 @@ static void remove_event_file_dir(struct trace_event_file *file)
free_event_filter(file->filter);
file->flags |= EVENT_FILE_FL_FREED;
event_file_put(file);
+
+ /* Wake up hist poll waiters to notice the EVENT_FILE_FL_FREED flag. */
+ hist_poll_wakeup();
}

/*
@@ -3609,11 +3612,6 @@ void trace_put_event_file(struct trace_event_file *file)
EXPORT_SYMBOL_GPL(trace_put_event_file);

#ifdef CONFIG_DYNAMIC_FTRACE
-
-/* Avoid typos */
-#define ENABLE_EVENT_STR "enable_event"
-#define DISABLE_EVENT_STR "disable_event"
-
struct event_probe_data {
struct trace_event_file *file;
unsigned long count;
diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
index 761d56ed9b8e5..83de1a196a4af 100644
--- a/kernel/trace/trace_events_hist.c
+++ b/kernel/trace/trace_events_hist.c
@@ -5752,7 +5752,7 @@ static __poll_t event_hist_poll(struct file *file, struct poll_table_struct *wai

guard(mutex)(&event_mutex);

- event_file = event_file_data(file);
+ event_file = event_file_file(file);
if (!event_file)
return EPOLLERR;

@@ -5790,7 +5790,7 @@ static int event_hist_open(struct inode *inode, struct file *file)

guard(mutex)(&event_mutex);

- event_file = event_file_data(file);
+ event_file = event_file_file(file);
if (!event_file) {
ret = -ENODEV;
goto err;
@@ -6881,7 +6881,7 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,

remove_hist_vars(hist_data);

- kfree(trigger_data);
+ trigger_data_free(trigger_data);

destroy_hist_data(hist_data);
goto out;
diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
index 3bd6071441ade..bc437b6ce8969 100644
--- a/kernel/trace/trace_hwlat.c
+++ b/kernel/trace/trace_hwlat.c
@@ -102,9 +102,9 @@ struct hwlat_sample {
/* keep the global state somewhere. */
static struct hwlat_data {

- struct mutex lock; /* protect changes */
+ struct mutex lock; /* protect changes */

- u64 count; /* total since reset */
+ atomic64_t count; /* total since reset */

u64 sample_window; /* total sampling window (on+off) */
u64 sample_width; /* active sampling portion of window */
@@ -195,8 +195,7 @@ void trace_hwlat_callback(bool enter)
* get_sample - sample the CPU TSC and look for likely hardware latencies
*
* Used to repeatedly capture the CPU TSC (or similar), looking for potential
- * hardware-induced latency. Called with interrupts disabled and with
- * hwlat_data.lock held.
+ * hardware-induced latency. Called with interrupts disabled.
*/
static int get_sample(void)
{
@@ -206,6 +205,7 @@ static int get_sample(void)
time_type start, t1, t2, last_t2;
s64 diff, outer_diff, total, last_total = 0;
u64 sample = 0;
+ u64 sample_width = READ_ONCE(hwlat_data.sample_width);
u64 thresh = tracing_thresh;
u64 outer_sample = 0;
int ret = -1;
@@ -269,7 +269,7 @@ static int get_sample(void)
if (diff > sample)
sample = diff; /* only want highest value */

- } while (total <= hwlat_data.sample_width);
+ } while (total <= sample_width);

barrier(); /* finish the above in the view for NMIs */
trace_hwlat_callback_enabled = false;
@@ -287,8 +287,7 @@ static int get_sample(void)
if (kdata->nmi_total_ts)
do_div(kdata->nmi_total_ts, NSEC_PER_USEC);

- hwlat_data.count++;
- s.seqnum = hwlat_data.count;
+ s.seqnum = atomic64_inc_return(&hwlat_data.count);
s.duration = sample;
s.outer_duration = outer_sample;
s.nmi_total_ts = kdata->nmi_total_ts;
@@ -837,7 +836,7 @@ static int hwlat_tracer_init(struct trace_array *tr)

hwlat_trace = tr;

- hwlat_data.count = 0;
+ atomic64_set(&hwlat_data.count, 0);
tr->max_latency = 0;
save_tracing_thresh = tracing_thresh;

diff --git a/kernel/ucount.c b/kernel/ucount.c
index 78f4c4255358f..8340f767c1aea 100644
--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -45,7 +45,7 @@ static int set_permissions(struct ctl_table_header *head,
int mode;

/* Allow users with CAP_SYS_RESOURCE unrestrained access */
- if (ns_capable(user_ns, CAP_SYS_RESOURCE))
+ if (ns_capable_noaudit(user_ns, CAP_SYS_RESOURCE))
mode = (table->mode & S_IRWXU) >> 6;
else
/* Allow all others at most read-only access */
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 8fbb4385e8149..70cb35d5420d8 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -460,7 +460,7 @@ static bool need_counting_irqs(void)
u8 util;
int tail = __this_cpu_read(cpustat_tail);

- tail = (tail + NUM_HARDIRQ_REPORT - 1) % NUM_HARDIRQ_REPORT;
+ tail = (tail + NUM_SAMPLE_PERIODS - 1) % NUM_SAMPLE_PERIODS;
util = __this_cpu_read(cpustat_util[tail][STATS_HARDIRQ]);
return util > HARDIRQ_PERCENT_THRESH;
}
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 3c87eb98609c0..9f7f7244bdc8e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -117,6 +117,8 @@ enum wq_internal_consts {
MAYDAY_INTERVAL = HZ / 10, /* and then every 100ms */
CREATE_COOLDOWN = HZ, /* time to breath after fail */

+ RESCUER_BATCH = 16, /* process items per turn */
+
/*
* Rescue workers are used only on emergencies and shared by
* all cpus. Give MIN_NICE.
@@ -284,6 +286,7 @@ struct pool_workqueue {
struct list_head pending_node; /* LN: node on wq_node_nr_active->pending_pwqs */
struct list_head pwqs_node; /* WR: node on wq->pwqs */
struct list_head mayday_node; /* MD: node on wq->maydays */
+ struct work_struct mayday_cursor; /* L: cursor on pool->worklist */

u64 stats[PWQ_NR_STATS];

@@ -1120,6 +1123,12 @@ static struct worker *find_worker_executing_work(struct worker_pool *pool,
return NULL;
}

+static void mayday_cursor_func(struct work_struct *work)
+{
+ /* should not be processed, only for marking position */
+ BUG();
+}
+
/**
* move_linked_works - move linked works to a list
* @work: start of series of works to be scheduled
@@ -1182,6 +1191,16 @@ static bool assign_work(struct work_struct *work, struct worker *worker,

lockdep_assert_held(&pool->lock);

+ /* The cursor work should not be processed */
+ if (unlikely(work->func == mayday_cursor_func)) {
+ /* only worker_thread() can possibly take this branch */
+ WARN_ON_ONCE(worker->rescue_wq);
+ if (nextp)
+ *nextp = list_next_entry(work, entry);
+ list_del_init(&work->entry);
+ return false;
+ }
+
/*
* A single work shouldn't be executed concurrently by multiple workers.
* __queue_work() ensures that @work doesn't jump to a different pool
@@ -3407,6 +3426,35 @@ static int worker_thread(void *__worker)
goto woke_up;
}

+static bool assign_rescuer_work(struct pool_workqueue *pwq, struct worker *rescuer)
+{
+ struct worker_pool *pool = pwq->pool;
+ struct work_struct *cursor = &pwq->mayday_cursor;
+ struct work_struct *work, *n;
+
+ /* need rescue? */
+ if (!pwq->nr_active || !need_to_create_worker(pool))
+ return false;
+
+ /* search from the start or cursor if available */
+ if (list_empty(&cursor->entry))
+ work = list_first_entry(&pool->worklist, struct work_struct, entry);
+ else
+ work = list_next_entry(cursor, entry);
+
+ /* find the next work item to rescue */
+ list_for_each_entry_safe_from(work, n, &pool->worklist, entry) {
+ if (get_work_pwq(work) == pwq && assign_work(work, rescuer, &n)) {
+ pwq->stats[PWQ_STAT_RESCUED]++;
+ /* put the cursor for next search */
+ list_move_tail(&cursor->entry, &n->entry);
+ return true;
+ }
+ }
+
+ return false;
+}
+
/**
* rescuer_thread - the rescuer thread function
* @__rescuer: self
@@ -3461,7 +3509,7 @@ static int rescuer_thread(void *__rescuer)
struct pool_workqueue *pwq = list_first_entry(&wq->maydays,
struct pool_workqueue, mayday_node);
struct worker_pool *pool = pwq->pool;
- struct work_struct *work, *n;
+ unsigned int count = 0;

__set_current_state(TASK_RUNNING);
list_del_init(&pwq->mayday_node);
@@ -3472,30 +3520,18 @@ static int rescuer_thread(void *__rescuer)

raw_spin_lock_irq(&pool->lock);

- /*
- * Slurp in all works issued via this workqueue and
- * process'em.
- */
WARN_ON_ONCE(!list_empty(&rescuer->scheduled));
- list_for_each_entry_safe(work, n, &pool->worklist, entry) {
- if (get_work_pwq(work) == pwq &&
- assign_work(work, rescuer, &n))
- pwq->stats[PWQ_STAT_RESCUED]++;
- }

- if (!list_empty(&rescuer->scheduled)) {
+ while (assign_rescuer_work(pwq, rescuer)) {
process_scheduled_works(rescuer);

/*
- * The above execution of rescued work items could
- * have created more to rescue through
- * pwq_activate_first_inactive() or chained
- * queueing. Let's put @pwq back on mayday list so
- * that such back-to-back work items, which may be
- * being used to relieve memory pressure, don't
- * incur MAYDAY_INTERVAL delay inbetween.
+ * If the per-turn work item limit is reached and other
+ * PWQs are in mayday, requeue mayday for this PWQ and
+ * let the rescuer handle the other PWQs first.
*/
- if (pwq->nr_active && need_to_create_worker(pool)) {
+ if (++count > RESCUER_BATCH && !list_empty(&pwq->wq->maydays) &&
+ pwq->nr_active && need_to_create_worker(pool)) {
raw_spin_lock(&wq_mayday_lock);
/*
* Queue iff we aren't racing destruction
@@ -3506,9 +3542,14 @@ static int rescuer_thread(void *__rescuer)
list_add_tail(&pwq->mayday_node, &wq->maydays);
}
raw_spin_unlock(&wq_mayday_lock);
+ break;
}
}

+ /* The cursor cannot be left behind without the rescuer watching it. */
+ if (!list_empty(&pwq->mayday_cursor.entry) && list_empty(&pwq->mayday_node))
+ list_del_init(&pwq->mayday_cursor.entry);
+
/*
* Leave this pool. Notify regular workers; otherwise, we end up
* with 0 concurrency and stalling the execution.
@@ -5108,6 +5149,19 @@ static void init_pwq(struct pool_workqueue *pwq, struct workqueue_struct *wq,
INIT_LIST_HEAD(&pwq->pwqs_node);
INIT_LIST_HEAD(&pwq->mayday_node);
kthread_init_work(&pwq->release_work, pwq_release_workfn);
+
+ /*
+ * Set the dummy cursor work with valid function and get_work_pwq().
+ *
+ * The cursor work should only be in the pwq->pool->worklist, and
+ * should not be treated as a processable work item.
+ *
+ * WORK_STRUCT_PENDING and WORK_STRUCT_INACTIVE just make it less
+ * surprising to kernel debugging tools and reviewers.
+ */
+ INIT_WORK(&pwq->mayday_cursor, mayday_cursor_func);
+ atomic_long_set(&pwq->mayday_cursor.data, (unsigned long)pwq |
+ WORK_STRUCT_PENDING | WORK_STRUCT_PWQ | WORK_STRUCT_INACTIVE);
}

/* sync @pwq with the current state of its associated wq and link it */
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b1d7c427bbe3d..ffa3da3583537 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1677,33 +1677,6 @@ config STACKTRACE
It is also used by various kernel debugging features that require
stack trace generation.

-config WARN_ALL_UNSEEDED_RANDOM
- bool "Warn for all uses of unseeded randomness"
- default n
- help
- Some parts of the kernel contain bugs relating to their use of
- cryptographically secure random numbers before it's actually possible
- to generate those numbers securely. This setting ensures that these
- flaws don't go unnoticed, by enabling a message, should this ever
- occur. This will allow people with obscure setups to know when things
- are going wrong, so that they might contact developers about fixing
- it.
-
- Unfortunately, on some models of some architectures getting
- a fully seeded CRNG is extremely difficult, and so this can
- result in dmesg getting spammed for a surprisingly long
- time. This is really bad from a security perspective, and
- so architecture maintainers really need to do what they can
- to get the CRNG seeded sooner after the system is booted.
- However, since users cannot do anything actionable to
- address this, by default this option is disabled.
-
- Say Y here if you want to receive warnings for all uses of
- unseeded randomness. This will be of use primarily for
- those developers interested in improving the security of
- Linux kernels running on their architecture (or
- subarchitecture).
-
config DEBUG_KOBJECT
bool "kobject debugging"
depends on DEBUG_KERNEL
diff --git a/lib/objpool.c b/lib/objpool.c
index b998b720c7329..d98fadf1de169 100644
--- a/lib/objpool.c
+++ b/lib/objpool.c
@@ -142,7 +142,7 @@ int objpool_init(struct objpool_head *pool, int nr_objs, int object_size,
pool->gfp = gfp & ~__GFP_ZERO;
pool->context = context;
pool->release = release;
- slot_size = nr_cpu_ids * sizeof(struct objpool_slot);
+ slot_size = nr_cpu_ids * sizeof(struct objpool_slot *);
pool->cpu_slots = kzalloc(slot_size, pool->gfp);
if (!pool->cpu_slots)
return -ENOMEM;
diff --git a/mm/highmem.c b/mm/highmem.c
index ef3189b36cadb..baade043c2148 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -180,12 +180,13 @@ struct page *__kmap_to_page(void *vaddr)
for (i = 0; i < kctrl->idx; i++) {
unsigned long base_addr;
int idx;
+ pte_t pteval = kctrl->pteval[i];

idx = arch_kmap_local_map_idx(i, pte_pfn(pteval));
base_addr = __fix_to_virt(FIX_KMAP_BEGIN + idx);

if (base_addr == base)
- return pte_page(kctrl->pteval[i]);
+ return pte_page(pteval);
}
}

diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
index c447add277ccd..108736d6b3aea 100644
--- a/mm/numa_memblks.c
+++ b/mm/numa_memblks.c
@@ -548,15 +548,16 @@ static int meminfo_to_nid(struct numa_meminfo *mi, u64 start)
int phys_to_target_node(u64 start)
{
int nid = meminfo_to_nid(&numa_meminfo, start);
+ int reserved_nid = meminfo_to_nid(&numa_reserved_meminfo, start);

/*
- * Prefer online nodes, but if reserved memory might be
- * hot-added continue the search with reserved ranges.
+ * Prefer online nodes unless the address is also described
+ * by reserved ranges, in which case use the reserved nid.
*/
- if (nid != NUMA_NO_NODE)
+ if (nid != NUMA_NO_NODE && reserved_nid == NUMA_NO_NODE)
return nid;

- return meminfo_to_nid(&numa_reserved_meminfo, start);
+ return reserved_nid;
}
EXPORT_SYMBOL_GPL(phys_to_target_node);

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f30a4ab8f254c..4282c9d0a5ddd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4400,6 +4400,20 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
compact_result == COMPACT_DEFERRED)
goto nopage;

+ /*
+ * THP page faults may attempt local node only first,
+ * but are then allowed to only compact, not reclaim,
+ * see alloc_pages_mpol().
+ *
+ * Compaction can fail for other reasons than those
+ * checked above and we don't want such THP allocations
+ * to put reclaim pressure on a single node in a
+ * situation where other nodes might have plenty of
+ * available memory.
+ */
+ if (gfp_mask & __GFP_THISNODE)
+ goto nopage;
+
/*
* Looks like reclaim/compaction is worth trying, but
* sync compaction could be very expensive, so keep
diff --git a/mm/slub.c b/mm/slub.c
index 30c544a0709e8..a6ed12d3e0449 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -757,7 +757,7 @@ static inline bool slab_update_freelist(struct kmem_cache *s, struct slab *slab,
* request size in the meta data area, for better debug and sanity check.
*/
static inline void set_orig_size(struct kmem_cache *s,
- void *object, unsigned int orig_size)
+ void *object, unsigned long orig_size)
{
void *p = kasan_reset_tag(object);
unsigned int kasan_meta_size;
@@ -778,10 +778,10 @@ static inline void set_orig_size(struct kmem_cache *s,
p += get_info_end(s);
p += sizeof(struct track) * 2;

- *(unsigned int *)p = orig_size;
+ *(unsigned long *)p = orig_size;
}

-static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
+static inline unsigned long get_orig_size(struct kmem_cache *s, void *object)
{
void *p = kasan_reset_tag(object);

@@ -791,7 +791,7 @@ static inline unsigned int get_orig_size(struct kmem_cache *s, void *object)
p += get_info_end(s);
p += sizeof(struct track) * 2;

- return *(unsigned int *)p;
+ return *(unsigned long *)p;
}

#ifdef CONFIG_SLUB_DEBUG
@@ -1096,7 +1096,7 @@ static void print_trailer(struct kmem_cache *s, struct slab *slab, u8 *p)
off += 2 * sizeof(struct track);

if (slub_debug_orig_size(s))
- off += sizeof(unsigned int);
+ off += sizeof(unsigned long);

off += kasan_metadata_size(s, false);

@@ -1292,7 +1292,7 @@ static int check_pad_bytes(struct kmem_cache *s, struct slab *slab, u8 *p)
off += 2 * sizeof(struct track);

if (s->flags & SLAB_KMALLOC)
- off += sizeof(unsigned int);
+ off += sizeof(unsigned long);
}

off += kasan_metadata_size(s, false);
@@ -5441,7 +5441,7 @@ static int calculate_sizes(struct kmem_cache_args *args, struct kmem_cache *s)

/* Save the original kmalloc request size */
if (flags & SLAB_KMALLOC)
- size += sizeof(unsigned int);
+ size += sizeof(unsigned long);
}
#endif

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 26e5ab2edaea3..1d2262fb54185 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2190,11 +2190,14 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
reclaim_list_global(&decay_list);
}

+#define KASAN_RELEASE_BATCH_SIZE 32
+
static void
kasan_release_vmalloc_node(struct vmap_node *vn)
{
struct vmap_area *va;
unsigned long start, end;
+ unsigned int batch_count = 0;

start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
@@ -2204,6 +2207,11 @@ kasan_release_vmalloc_node(struct vmap_node *vn)
kasan_release_vmalloc(va->va_start, va->va_end,
va->va_start, va->va_end,
KASAN_VMALLOC_PAGE_RANGE);
+
+ if (need_resched() || (++batch_count >= KASAN_RELEASE_BATCH_SIZE)) {
+ cond_resched();
+ batch_count = 0;
+ }
}

kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index b9ff69c7522a1..068d57515dd58 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -274,45 +274,52 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
{
int i, j;

- write_lock(&xen_9pfs_lock);
- list_del(&priv->list);
- write_unlock(&xen_9pfs_lock);
-
- for (i = 0; i < XEN_9PFS_NUM_RINGS; i++) {
- struct xen_9pfs_dataring *ring = &priv->rings[i];
-
- cancel_work_sync(&ring->work);
-
- if (!priv->rings[i].intf)
- break;
- if (priv->rings[i].irq > 0)
- unbind_from_irqhandler(priv->rings[i].irq, ring);
- if (priv->rings[i].data.in) {
- for (j = 0;
- j < (1 << priv->rings[i].intf->ring_order);
- j++) {
- grant_ref_t ref;
-
- ref = priv->rings[i].intf->ref[j];
- gnttab_end_foreign_access(ref, NULL);
- }
- free_pages_exact(priv->rings[i].data.in,
+ if (priv->rings) {
+ for (i = 0; i < XEN_9PFS_NUM_RINGS; i++) {
+ struct xen_9pfs_dataring *ring = &priv->rings[i];
+
+ cancel_work_sync(&ring->work);
+
+ if (!priv->rings[i].intf)
+ break;
+ if (priv->rings[i].irq > 0)
+ unbind_from_irqhandler(priv->rings[i].irq, ring);
+ if (priv->rings[i].data.in) {
+ for (j = 0;
+ j < (1 << priv->rings[i].intf->ring_order);
+ j++) {
+ grant_ref_t ref;
+
+ ref = priv->rings[i].intf->ref[j];
+ gnttab_end_foreign_access(ref, NULL);
+ }
+ free_pages_exact(priv->rings[i].data.in,
1UL << (priv->rings[i].intf->ring_order +
XEN_PAGE_SHIFT));
+ }
+ gnttab_end_foreign_access(priv->rings[i].ref, NULL);
+ free_page((unsigned long)priv->rings[i].intf);
}
- gnttab_end_foreign_access(priv->rings[i].ref, NULL);
- free_page((unsigned long)priv->rings[i].intf);
+ kfree(priv->rings);
}
- kfree(priv->rings);
kfree(priv->tag);
kfree(priv);
}

static void xen_9pfs_front_remove(struct xenbus_device *dev)
{
- struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
+ struct xen_9pfs_front_priv *priv;

+ write_lock(&xen_9pfs_lock);
+ priv = dev_get_drvdata(&dev->dev);
+ if (priv == NULL) {
+ write_unlock(&xen_9pfs_lock);
+ return;
+ }
dev_set_drvdata(&dev->dev, NULL);
+ list_del(&priv->list);
+ write_unlock(&xen_9pfs_lock);
+
xen_9pfs_front_free(priv);
}

@@ -379,7 +386,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
{
int ret, i;
struct xenbus_transaction xbt;
- struct xen_9pfs_front_priv *priv = dev_get_drvdata(&dev->dev);
+ struct xen_9pfs_front_priv *priv;
char *versions, *v;
unsigned int max_rings, max_ring_order, len = 0;

@@ -407,6 +414,10 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order) / 2;

+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->dev = dev;
priv->rings = kcalloc(XEN_9PFS_NUM_RINGS, sizeof(*priv->rings),
GFP_KERNEL);
if (!priv->rings) {
@@ -465,6 +476,11 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
goto error;
}

+ write_lock(&xen_9pfs_lock);
+ dev_set_drvdata(&dev->dev, priv);
+ list_add_tail(&priv->list, &xen_9pfs_devs);
+ write_unlock(&xen_9pfs_lock);
+
xenbus_switch_state(dev, XenbusStateInitialised);
return 0;

@@ -479,19 +495,6 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
static int xen_9pfs_front_probe(struct xenbus_device *dev,
const struct xenbus_device_id *id)
{
- struct xen_9pfs_front_priv *priv = NULL;
-
- priv = kzalloc(sizeof(*priv), GFP_KERNEL);
- if (!priv)
- return -ENOMEM;
-
- priv->dev = dev;
- dev_set_drvdata(&dev->dev, priv);
-
- write_lock(&xen_9pfs_lock);
- list_add_tail(&priv->list, &xen_9pfs_devs);
- write_unlock(&xen_9pfs_lock);
-
return 0;
}

diff --git a/net/atm/signaling.c b/net/atm/signaling.c
index e70ae2c113f95..358fbe5e4d1d0 100644
--- a/net/atm/signaling.c
+++ b/net/atm/signaling.c
@@ -22,6 +22,36 @@

struct atm_vcc *sigd = NULL;

+/*
+ * find_get_vcc - validate and get a reference to a vcc pointer
+ * @vcc: the vcc pointer to validate
+ *
+ * This function validates that @vcc points to a registered VCC in vcc_hash.
+ * If found, it increments the socket reference count and returns the vcc.
+ * The caller must call sock_put(sk_atm(vcc)) when done.
+ *
+ * Returns the vcc pointer if valid, NULL otherwise.
+ */
+static struct atm_vcc *find_get_vcc(struct atm_vcc *vcc)
+{
+ int i;
+
+ read_lock(&vcc_sklist_lock);
+ for (i = 0; i < VCC_HTABLE_SIZE; i++) {
+ struct sock *s;
+
+ sk_for_each(s, &vcc_hash[i]) {
+ if (atm_sk(s) == vcc) {
+ sock_hold(s);
+ read_unlock(&vcc_sklist_lock);
+ return vcc;
+ }
+ }
+ }
+ read_unlock(&vcc_sklist_lock);
+ return NULL;
+}
+
static void sigd_put_skb(struct sk_buff *skb)
{
if (!sigd) {
@@ -69,7 +99,14 @@ static int sigd_send(struct atm_vcc *vcc, struct sk_buff *skb)

msg = (struct atmsvc_msg *) skb->data;
WARN_ON(refcount_sub_and_test(skb->truesize, &sk_atm(vcc)->sk_wmem_alloc));
- vcc = *(struct atm_vcc **) &msg->vcc;
+
+ vcc = find_get_vcc(*(struct atm_vcc **)&msg->vcc);
+ if (!vcc) {
+ pr_debug("invalid vcc pointer in msg\n");
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
pr_debug("%d (0x%lx)\n", (int)msg->type, (unsigned long)vcc);
sk = sk_atm(vcc);

@@ -100,7 +137,16 @@ static int sigd_send(struct atm_vcc *vcc, struct sk_buff *skb)
clear_bit(ATM_VF_WAITING, &vcc->flags);
break;
case as_indicate:
+ /* Release the reference taken for msg->vcc; we'll use msg->listen_vcc instead */
+ sock_put(sk);

+ sock_put(sk);
+
+ vcc = find_get_vcc(*(struct atm_vcc **)&msg->listen_vcc);
+ if (!vcc) {
+ pr_debug("invalid listen_vcc pointer in msg\n");
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
sk = sk_atm(vcc);
pr_debug("as_indicate!!!\n");
lock_sock(sk);
@@ -115,6 +161,8 @@ static int sigd_send(struct atm_vcc *vcc, struct sk_buff *skb)
sk->sk_state_change(sk);
as_indicate_complete:
release_sock(sk);
+ /* Paired with find_get_vcc(msg->listen_vcc) above */
+ sock_put(sk);
return 0;
case as_close:
set_bit(ATM_VF_RELEASED, &vcc->flags);
@@ -131,11 +179,15 @@ static int sigd_send(struct atm_vcc *vcc, struct sk_buff *skb)
break;
default:
pr_alert("bad message type %d\n", (int)msg->type);
+ /* Paired with find_get_vcc(msg->vcc) above */
+ sock_put(sk);
return -EINVAL;
}
sk->sk_state_change(sk);
out:
dev_kfree_skb(skb);
+ /* Paired with find_get_vcc(msg->vcc) above */
+ sock_put(sk);
return 0;
}

diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
index dad9020474149..fa74fac5af778 100644
--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -967,6 +967,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
switch (type) {
case ACL_LINK:
conn->pkt_type = hdev->pkt_type & ACL_PTYPE_MASK;
+ conn->link_policy = hdev->link_policy;
conn->mtu = hdev->acl_mtu;
break;
case LE_LINK:
@@ -2511,8 +2512,8 @@ void hci_conn_enter_active_mode(struct hci_conn *conn, __u8 force_active)

timer:
if (hdev->idle_timeout > 0)
- queue_delayed_work(hdev->workqueue, &conn->idle_work,
- msecs_to_jiffies(hdev->idle_timeout));
+ mod_delayed_work(hdev->workqueue, &conn->idle_work,
+ msecs_to_jiffies(hdev->idle_timeout));
}

/* Drop all connection on the device */
diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
index f79b38603205c..00de90fee44a7 100644
--- a/net/bluetooth/hci_sync.c
+++ b/net/bluetooth/hci_sync.c
@@ -6853,8 +6853,6 @@ static int hci_acl_create_conn_sync(struct hci_dev *hdev, void *data)

conn->attempt++;

- conn->link_policy = hdev->link_policy;
-
memset(&cp, 0, sizeof(cp));
bacpy(&cp.bdaddr, &conn->dst);
cp.pscan_rep_mode = 0x02;
diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
index 41197f9fdf980..61fd5a01fbcac 100644
--- a/net/bluetooth/l2cap_core.c
+++ b/net/bluetooth/l2cap_core.c
@@ -4865,6 +4865,13 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
goto response_unlock;
}

+ /* Check if Key Size is sufficient for the security level */
+ if (!l2cap_check_enc_key_size(conn->hcon, pchan)) {
+ result = L2CAP_CR_LE_BAD_KEY_SIZE;
+ chan = NULL;
+ goto response_unlock;
+ }
+
/* Check for valid dynamic CID range */
if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
result = L2CAP_CR_LE_INVALID_SCID;
@@ -5000,13 +5007,15 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
struct l2cap_chan *chan, *pchan;
u16 mtu, mps;
__le16 psm;
- u8 result, len = 0;
+ u8 result, rsp_len = 0;
int i, num_scid;
bool defer = false;

if (!enable_ecred)
return -EINVAL;

+ memset(pdu, 0, sizeof(*pdu));
+
if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
result = L2CAP_CR_LE_INVALID_PARAMS;
goto response;
@@ -5015,6 +5024,9 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
cmd_len -= sizeof(*req);
num_scid = cmd_len / sizeof(u16);

+ /* Always respond with the same number of scids as in the request */
+ rsp_len = cmd_len;
+
if (num_scid > L2CAP_ECRED_MAX_CID) {
result = L2CAP_CR_LE_INVALID_PARAMS;
goto response;
@@ -5024,7 +5036,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
mps = __le16_to_cpu(req->mps);

if (mtu < L2CAP_ECRED_MIN_MTU || mps < L2CAP_ECRED_MIN_MPS) {
- result = L2CAP_CR_LE_UNACCEPT_PARAMS;
+ result = L2CAP_CR_LE_INVALID_PARAMS;
goto response;
}

@@ -5044,8 +5056,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,

BT_DBG("psm 0x%2.2x mtu %u mps %u", __le16_to_cpu(psm), mtu, mps);

- memset(pdu, 0, sizeof(*pdu));
-
/* Check if we have socket listening on psm */
pchan = l2cap_global_chan_by_psm(BT_LISTEN, psm, &conn->hcon->src,
&conn->hcon->dst, LE_LINK);
@@ -5058,7 +5068,16 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,

if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
SMP_ALLOW_STK)) {
- result = L2CAP_CR_LE_AUTHENTICATION;
+ result = pchan->sec_level == BT_SECURITY_MEDIUM ?
+ L2CAP_CR_LE_ENCRYPTION : L2CAP_CR_LE_AUTHENTICATION;
+ goto unlock;
+ }
+
+ /* If the listening channel has set an output MTU, the requested MTU
+ * shall be greater than or equal to that value.
+ */
+ if (pchan->omtu && mtu < pchan->omtu) {
+ result = L2CAP_CR_LE_UNACCEPT_PARAMS;
goto unlock;
}

@@ -5070,7 +5089,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
BT_DBG("scid[%d] 0x%4.4x", i, scid);

pdu->dcid[i] = 0x0000;
- len += sizeof(*pdu->dcid);

/* Check for valid dynamic CID range */
if (scid < L2CAP_CID_DYN_START || scid > L2CAP_CID_LE_DYN_END) {
@@ -5137,7 +5155,7 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
return 0;

l2cap_send_cmd(conn, cmd->ident, L2CAP_ECRED_CONN_RSP,
- sizeof(*pdu) + len, pdu);
+ sizeof(*pdu) + rsp_len, pdu);

return 0;
}
@@ -5259,14 +5277,14 @@ static inline int l2cap_ecred_reconf_req(struct l2cap_conn *conn,
struct l2cap_ecred_reconf_req *req = (void *) data;
struct l2cap_ecred_reconf_rsp rsp;
u16 mtu, mps, result;
- struct l2cap_chan *chan;
+ struct l2cap_chan *chan[L2CAP_ECRED_MAX_CID] = {};
int i, num_scid;

if (!enable_ecred)
return -EINVAL;

- if (cmd_len < sizeof(*req) || cmd_len - sizeof(*req) % sizeof(u16)) {
- result = L2CAP_CR_LE_INVALID_PARAMS;
+ if (cmd_len < sizeof(*req) || (cmd_len - sizeof(*req)) % sizeof(u16)) {
+ result = L2CAP_RECONF_INVALID_CID;
goto respond;
}

@@ -5276,42 +5294,69 @@ static inline int l2cap_ecred_reconf_req(struct l2cap_conn *conn,
BT_DBG("mtu %u mps %u", mtu, mps);

if (mtu < L2CAP_ECRED_MIN_MTU) {
- result = L2CAP_RECONF_INVALID_MTU;
+ result = L2CAP_RECONF_INVALID_PARAMS;
goto respond;
}

if (mps < L2CAP_ECRED_MIN_MPS) {
- result = L2CAP_RECONF_INVALID_MPS;
+ result = L2CAP_RECONF_INVALID_PARAMS;
goto respond;
}

cmd_len -= sizeof(*req);
num_scid = cmd_len / sizeof(u16);
+
+ if (num_scid > L2CAP_ECRED_MAX_CID) {
+ result = L2CAP_RECONF_INVALID_PARAMS;
+ goto respond;
+ }
+
result = L2CAP_RECONF_SUCCESS;

+ /* Check if each SCID, MTU and MPS are valid */
for (i = 0; i < num_scid; i++) {
u16 scid;

scid = __le16_to_cpu(req->scid[i]);
- if (!scid)
- return -EPROTO;
+ if (!scid) {
+ result = L2CAP_RECONF_INVALID_CID;
+ goto respond;
+ }

- chan = __l2cap_get_chan_by_dcid(conn, scid);
- if (!chan)
- continue;
+ chan[i] = __l2cap_get_chan_by_dcid(conn, scid);
+ if (!chan[i]) {
+ result = L2CAP_RECONF_INVALID_CID;
+ goto respond;
+ }

- /* If the MTU value is decreased for any of the included
- * channels, then the receiver shall disconnect all
- * included channels.
+ /* The MTU field shall be greater than or equal to the greatest
+ * current MTU size of these channels.
*/
- if (chan->omtu > mtu) {
- BT_ERR("chan %p decreased MTU %u -> %u", chan,
- chan->omtu, mtu);
+ if (chan[i]->omtu > mtu) {
+ BT_ERR("chan %p decreased MTU %u -> %u", chan[i],
+ chan[i]->omtu, mtu);
result = L2CAP_RECONF_INVALID_MTU;
+ goto respond;
}

- chan->omtu = mtu;
- chan->remote_mps = mps;
+ /* If more than one channel is being configured, the MPS field
+ * shall be greater than or equal to the current MPS size of
+ * each of these channels. If only one channel is being
+ * configured, the MPS field may be less than the current MPS
+ * of that channel.
+ */
+ if (chan[i]->remote_mps >= mps && i) {
+ BT_ERR("chan %p decreased MPS %u -> %u", chan[i],
+ chan[i]->remote_mps, mps);
+ result = L2CAP_RECONF_INVALID_MPS;
+ goto respond;
+ }
+ }
+
+ /* Commit the new MTU and MPS values after checking they are valid */
+ for (i = 0; i < num_scid; i++) {
+ chan[i]->omtu = mtu;
+ chan[i]->remote_mps = mps;
}

respond:
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index b35f1551b9be7..ef1b8a44420ed 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -1029,10 +1029,17 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
break;
}

- /* Setting is not supported as it's the remote side that
- * decides this.
- */
- err = -EPERM;
+ /* Only allow setting output MTU when not connected */
+ if (sk->sk_state == BT_CONNECTED) {
+ err = -EISCONN;
+ break;
+ }
+
+ err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
+ if (err)
+ break;
+
+ chan->omtu = mtu;
break;

case BT_RCVMTU:
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 4227894e35792..9bd2914006df7 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -244,14 +244,11 @@ br_multicast_port_vid_to_port_ctx(struct net_bridge_port *port, u16 vid)

lockdep_assert_held_once(&port->br->multicast_lock);

- if (!br_opt_get(port->br, BROPT_MCAST_VLAN_SNOOPING_ENABLED))
- return NULL;
-
/* Take RCU to access the vlan. */
rcu_read_lock();

vlan = br_vlan_find(nbp_vlan_group_rcu(port), vid);
- if (vlan && !br_multicast_port_ctx_vlan_disabled(&vlan->port_mcast_ctx))
+ if (vlan)
pmctx = &vlan->port_mcast_ctx;

rcu_read_unlock();
@@ -701,7 +698,10 @@ br_multicast_port_ngroups_inc_one(struct net_bridge_mcast_port *pmctx,
u32 max = READ_ONCE(pmctx->mdb_max_entries);
u32 n = READ_ONCE(pmctx->mdb_n_entries);

- if (max && n >= max) {
+ /* enforce the max limit when it's a port pmctx or a port-vlan pmctx
+ * with snooping enabled
+ */
+ if (!br_multicast_port_ctx_vlan_disabled(pmctx) && max && n >= max) {
NL_SET_ERR_MSG_FMT_MOD(extack, "%s is already in %u groups, and mcast_max_groups=%u",
what, n, max);
return -E2BIG;
@@ -736,9 +736,7 @@ static int br_multicast_port_ngroups_inc(struct net_bridge_port *port,
return err;
}

- /* Only count on the VLAN context if VID is given, and if snooping on
- * that VLAN is enabled.
- */
+ /* Only count on the VLAN context if VID is given */
if (!group->vid)
return 0;

@@ -2010,6 +2008,18 @@ void br_multicast_port_ctx_init(struct net_bridge_port *port,
timer_setup(&pmctx->ip6_own_query.timer,
br_ip6_multicast_port_query_expired, 0);
#endif
+ /* initialize mdb_n_entries if a new port vlan is being created */
+ if (vlan) {
+ struct net_bridge_port_group *pg;
+ u32 n = 0;
+
+ spin_lock_bh(&port->br->multicast_lock);
+ hlist_for_each_entry(pg, &port->mglist, mglist)
+ if (pg->key.addr.vid == vlan->vid)
+ n++;
+ WRITE_ONCE(pmctx->mdb_n_entries, n);
+ spin_unlock_bh(&port->br->multicast_lock);
+ }
}

void br_multicast_port_ctx_deinit(struct net_bridge_mcast_port *pmctx)
@@ -2093,25 +2103,6 @@ static void __br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx)
br_ip4_multicast_add_router(brmctx, pmctx);
br_ip6_multicast_add_router(brmctx, pmctx);
}
-
- if (br_multicast_port_ctx_is_vlan(pmctx)) {
- struct net_bridge_port_group *pg;
- u32 n = 0;
-
- /* The mcast_n_groups counter might be wrong. First,
- * BR_VLFLAG_MCAST_ENABLED is toggled before temporary entries
- * are flushed, thus mcast_n_groups after the toggle does not
- * reflect the true values. And second, permanent entries added
- * while BR_VLFLAG_MCAST_ENABLED was disabled, are not reflected
- * either. Thus we have to refresh the counter.
- */
-
- hlist_for_each_entry(pg, &pmctx->port->mglist, mglist) {
- if (pg->key.addr.vid == pmctx->vlan->vid)
- n++;
- }
- WRITE_ONCE(pmctx->mdb_n_entries, n);
- }
}

static void br_multicast_enable_port_ctx(struct net_bridge_mcast_port *pmctx)
diff --git a/net/ceph/crypto.c b/net/ceph/crypto.c
index 051d22c0e4ad4..3397d105f74f9 100644
--- a/net/ceph/crypto.c
+++ b/net/ceph/crypto.c
@@ -37,9 +37,6 @@ static int set_secret(struct ceph_crypto_key *key, void *buf)
return -ENOTSUPP;
}

- if (!key->len)
- return -EINVAL;
-
key->key = kmemdup(buf, key->len, GFP_NOIO);
if (!key->key) {
ret = -ENOMEM;
@@ -95,6 +92,11 @@ int ceph_crypto_key_decode(struct ceph_crypto_key *key, void **p, void *end)
ceph_decode_copy(p, &key->created, sizeof(key->created));
key->len = ceph_decode_16(p);
ceph_decode_need(p, end, key->len, bad);
+ if (key->len > CEPH_MAX_KEY_LEN) {
+ pr_err("secret too big %d\n", key->len);
+ return -EINVAL;
+ }
+
ret = set_secret(key, *p);
memzero_explicit(*p, key->len);
*p += key->len;
diff --git a/net/ceph/crypto.h b/net/ceph/crypto.h
index 13bd526349fa1..0d32f1649f3d0 100644
--- a/net/ceph/crypto.h
+++ b/net/ceph/crypto.h
@@ -5,7 +5,7 @@
#include <linux/ceph/types.h>
#include <linux/ceph/buffer.h>

-#define CEPH_KEY_LEN 16
+#define CEPH_MAX_KEY_LEN 16
#define CEPH_MAX_CON_SECRET_LEN 64

/*
diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index f163283abc4e9..83c29714226fd 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -2389,7 +2389,7 @@ static int process_auth_reply_more(struct ceph_connection *con,
*/
static int process_auth_done(struct ceph_connection *con, void *p, void *end)
{
- u8 session_key_buf[CEPH_KEY_LEN + 16];
+ u8 session_key_buf[CEPH_MAX_KEY_LEN + 16];
u8 con_secret_buf[CEPH_MAX_CON_SECRET_LEN + 16];
u8 *session_key = PTR_ALIGN(&session_key_buf[0], 16);
u8 *con_secret = PTR_ALIGN(&con_secret_buf[0], 16);
diff --git a/net/core/dev.c b/net/core/dev.c
index 1d276a26a360d..e7127eca1afc5 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -720,7 +720,7 @@ static struct net_device_path *dev_fwd_path(struct net_device_path_stack *stack)
{
int k = stack->num_paths++;

- if (WARN_ON_ONCE(k >= NET_DEVICE_PATH_STACK_MAX))
+ if (k >= NET_DEVICE_PATH_STACK_MAX)
return NULL;

return &stack->path[k];
@@ -4514,6 +4514,8 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
* to -1 or to their cpu id, but not to our id.
*/
if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
+ bool is_list = false;
+
if (dev_xmit_recursion())
goto recursion_alert;

@@ -4524,17 +4526,28 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
HARD_TX_LOCK(dev, txq, cpu);

if (!netif_xmit_stopped(txq)) {
+ is_list = !!skb->next;
+
dev_xmit_recursion_inc();
skb = dev_hard_start_xmit(skb, dev, txq, &rc);
dev_xmit_recursion_dec();
- if (dev_xmit_complete(rc)) {
- HARD_TX_UNLOCK(dev, txq);
- goto out;
- }
+
+ /* GSO segments a single SKB into
+ * a list of frames. TCP expects error
+ * to mean none of the data was sent.
+ */
+ if (is_list)
+ rc = NETDEV_TX_OK;
}
HARD_TX_UNLOCK(dev, txq);
+ if (!skb) /* xmit completed */
+ goto out;
+
net_crit_ratelimited("Virtual device %s asks to queue packet!\n",
dev->name);
+ /* NETDEV_TX_BUSY or queue was stopped */
+ if (!is_list)
+ rc = -ENETDOWN;
} else {
/* Recursion is detected! It is possible,
* unfortunately
@@ -4542,10 +4555,10 @@ int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
recursion_alert:
net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",
dev->name);
+ rc = -ENETDOWN;
}
}

- rc = -ENETDOWN;
rcu_read_unlock_bh();

dev_core_stats_tx_dropped_inc(dev);
diff --git a/net/core/filter.c b/net/core/filter.c
index 06e179865a21b..182a7388e84f5 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4140,7 +4140,7 @@ static const struct bpf_func_proto bpf_xdp_store_bytes_proto = {
.ret_type = RET_INTEGER,
.arg1_type = ARG_PTR_TO_CTX,
.arg2_type = ARG_ANYTHING,
- .arg3_type = ARG_PTR_TO_UNINIT_MEM,
+ .arg3_type = ARG_PTR_TO_MEM | MEM_RDONLY,
.arg4_type = ARG_CONST_SIZE,
};

diff --git a/net/core/gro.c b/net/core/gro.c
index 40aaac4e04f34..ac498c9f82cf5 100644
--- a/net/core/gro.c
+++ b/net/core/gro.c
@@ -416,7 +416,7 @@ static void gro_pull_from_frag0(struct sk_buff *skb, int grow)
{
struct skb_shared_info *pinfo = skb_shinfo(skb);

- BUG_ON(skb->end - skb->tail < grow);
+ DEBUG_NET_WARN_ON_ONCE(skb->end - skb->tail < grow);

memcpy(skb_tail_pointer(skb), NAPI_GRO_CB(skb)->frag0, grow);

diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index adb3166ede972..6ece4eaecd489 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -408,22 +408,26 @@ int sk_msg_memcopy_from_iter(struct sock *sk, struct iov_iter *from,
}
EXPORT_SYMBOL_GPL(sk_msg_memcopy_from_iter);

-/* Receive sk_msg from psock->ingress_msg to @msg. */
-int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
- int len, int flags)
+int __sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ int len, int flags, int *copied_from_self)
{
struct iov_iter *iter = &msg->msg_iter;
int peek = flags & MSG_PEEK;
struct sk_msg *msg_rx;
int i, copied = 0;
+ bool from_self;

msg_rx = sk_psock_peek_msg(psock);
+ if (copied_from_self)
+ *copied_from_self = 0;
+
while (copied != len) {
struct scatterlist *sge;

if (unlikely(!msg_rx))
break;

+ from_self = msg_rx->sk == sk;
i = msg_rx->sg.start;
do {
struct page *page;
@@ -442,6 +446,9 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
}

copied += copy;
+ if (from_self && copied_from_self)
+ *copied_from_self += copy;
+
if (likely(!peek)) {
sge->offset += copy;
sge->length -= copy;
@@ -450,6 +457,7 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
atomic_sub(copy, &sk->sk_rmem_alloc);
}
msg_rx->sg.size -= copy;
+ sk_psock_msg_len_add(psock, -copy);

if (!sge->length) {
sk_msg_iter_var_next(i);
@@ -486,6 +494,13 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
out:
return copied;
}
+
+/* Receive sk_msg from psock->ingress_msg to @msg. */
+int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ int len, int flags)
+{
+ return __sk_msg_recvmsg(sk, psock, msg, len, flags, NULL);
+}
EXPORT_SYMBOL_GPL(sk_msg_recvmsg);

bool sk_msg_is_readable(struct sock *sk)
@@ -615,6 +630,12 @@ static int sk_psock_skb_ingress_self(struct sk_psock *psock, struct sk_buff *skb
if (unlikely(!msg))
return -EAGAIN;
skb_set_owner_r(skb, sk);
+
+ /* This is used in tcp_bpf_recvmsg_parser() to determine whether the
+ * data originates from the socket's own protocol stack. No need to
+ * refcount sk because msg's lifetime is bound to sk via the ingress_msg.
+ */
+ msg->sk = sk;
err = sk_psock_skb_ingress_enqueue(skb, off, len, psock, sk, msg, take_ref);
if (err < 0)
kfree(msg);
@@ -800,9 +821,11 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
list_del(&msg->list);
if (!msg->skb)
atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc);
+ sk_psock_msg_len_add(psock, -msg->sg.size);
sk_msg_free(psock->sk, msg);
kfree(msg);
}
+ WARN_ON_ONCE(psock->msg_tot_len);
}

static void __sk_psock_zap_ingress(struct sk_psock *psock)
@@ -908,6 +931,7 @@ int sk_psock_msg_verdict(struct sock *sk, struct sk_psock *psock,
sk_msg_compute_data_pointers(msg);
msg->sk = sk;
ret = bpf_prog_run_pin_on_cpu(prog, msg);
+ msg->sk = NULL;
ret = sk_psock_map_verd(ret, msg->sk_redir);
psock->apply_bytes = msg->apply_bytes;
if (ret == __SK_REDIRECT) {
diff --git a/net/ipv4/fib_lookup.h b/net/ipv4/fib_lookup.h
index f9b9e26c32c19..0b72796dd1ad3 100644
--- a/net/ipv4/fib_lookup.h
+++ b/net/ipv4/fib_lookup.h
@@ -28,8 +28,10 @@ struct fib_alias {
/* Don't write on fa_state unless needed, to keep it shared on all cpus */
static inline void fib_alias_accessed(struct fib_alias *fa)
{
- if (!(fa->fa_state & FA_S_ACCESSED))
- fa->fa_state |= FA_S_ACCESSED;
+ u8 fa_state = READ_ONCE(fa->fa_state);
+
+ if (!(fa_state & FA_S_ACCESSED))
+ WRITE_ONCE(fa->fa_state, fa_state | FA_S_ACCESSED);
}

/* Exported by fib_semantics.c */
diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
index 658f26d9a9ec5..ee47b67a08703 100644
--- a/net/ipv4/fib_trie.c
+++ b/net/ipv4/fib_trie.c
@@ -1286,7 +1286,7 @@ int fib_table_insert(struct net *net, struct fib_table *tb,
new_fa->fa_dscp = fa->fa_dscp;
new_fa->fa_info = fi;
new_fa->fa_type = cfg->fc_type;
- state = fa->fa_state;
+ state = READ_ONCE(fa->fa_state);
new_fa->fa_state = state & ~FA_S_ACCESSED;
new_fa->fa_slen = fa->fa_slen;
new_fa->tb_id = tb->tb_id;
@@ -1751,7 +1751,7 @@ int fib_table_delete(struct net *net, struct fib_table *tb,

fib_remove_alias(t, tp, l, fa_to_delete);

- if (fa_to_delete->fa_state & FA_S_ACCESSED)
+ if (READ_ONCE(fa_to_delete->fa_state) & FA_S_ACCESSED)
rt_cache_flush(cfg->fc_nlinfo.nl_net);

fib_release_info(fa_to_delete->fa_info);
diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
index 508b23204edc5..8ab51b51cc9b2 100644
--- a/net/ipv4/icmp.c
+++ b/net/ipv4/icmp.c
@@ -247,7 +247,8 @@ bool icmp_global_allow(struct net *net)
if (delta < HZ / 50)
return false;

- incr = READ_ONCE(net->ipv4.sysctl_icmp_msgs_per_sec) * delta / HZ;
+ incr = READ_ONCE(net->ipv4.sysctl_icmp_msgs_per_sec);
+ incr = div_u64((u64)incr * delta, HZ);
if (!incr)
return false;

@@ -546,14 +547,30 @@ static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4,
goto relookup_failed;
}
/* Ugh! */
- orefdst = skb_in->_skb_refdst; /* save old refdst */
- skb_dst_set(skb_in, NULL);
+ orefdst = skb_dstref_steal(skb_in);
err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr,
dscp, rt2->dst.dev);

dst_release(&rt2->dst);
rt2 = skb_rtable(skb_in);
- skb_in->_skb_refdst = orefdst; /* restore old refdst */
+ /* steal dst entry from skb_in, don't drop refcnt */
+ skb_dstref_steal(skb_in);
+ skb_dstref_restore(skb_in, orefdst);
+
+ /*
+ * At this point, fl4_dec.daddr should NOT be local (we
+ * checked fl4_dec.saddr above). However, a race condition
+ * may occur if the address is added to the interface
+ * concurrently. In that case, ip_route_input() returns a
+ * LOCAL route with dst.output=ip_rt_bug, which must not
+ * be used for output.
+ */
+ if (!err && rt2 && rt2->rt_type == RTN_LOCAL) {
+ net_warn_ratelimited("detected local route for %pI4 during ICMP sending, src %pI4\n",
+ &fl4_dec.daddr, &fl4_dec.saddr);
+ dst_release(&rt2->dst);
+ err = -EINVAL;
+ }
}

if (err)
@@ -840,16 +857,22 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
/* Checkin full IP header plus 8 bytes of protocol to
* avoid additional coding at protocol handlers.
*/
- if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
- __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
- return;
- }
+ if (!pskb_may_pull(skb, iph->ihl * 4 + 8))
+ goto out;
+
+ /* IPPROTO_RAW sockets are not supposed to receive anything. */
+ if (protocol == IPPROTO_RAW)
+ goto out;

raw_icmp_error(skb, protocol, info);

ipprot = rcu_dereference(inet_protos[protocol]);
if (ipprot && ipprot->err_handler)
ipprot->err_handler(skb, info);
+ return;
+
+out:
+ __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
}

static bool icmp_tag_validation(int proto)
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
index f4a87b90351e9..52c661c6749f3 100644
--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -224,7 +224,7 @@ static void igmp_start_timer(struct ip_mc_list *im, int max_delay)

static void igmp_gq_start_timer(struct in_device *in_dev)
{
- int tv = get_random_u32_below(in_dev->mr_maxdelay);
+ int tv = get_random_u32_below(READ_ONCE(in_dev->mr_maxdelay));
unsigned long exp = jiffies + tv + 2;

if (in_dev->mr_gq_running &&
@@ -1006,7 +1006,7 @@ static bool igmp_heard_query(struct in_device *in_dev, struct sk_buff *skb,
max_delay = IGMPV3_MRC(ih3->code)*(HZ/IGMP_TIMER_SCALE);
if (!max_delay)
max_delay = 1; /* can't mod w/ 0 */
- in_dev->mr_maxdelay = max_delay;
+ WRITE_ONCE(in_dev->mr_maxdelay, max_delay);

/* RFC3376, 4.1.6. QRV and 4.1.7. QQIC, when the most recently
* received value was zero, use the default or statically
diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
index 81e86e5defee6..3d154bc7e1f2e 100644
--- a/net/ipv4/ip_options.c
+++ b/net/ipv4/ip_options.c
@@ -615,14 +615,13 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
}
memcpy(&nexthop, &optptr[srrptr-1], 4);

- orefdst = skb->_skb_refdst;
- skb_dst_set(skb, NULL);
+ orefdst = skb_dstref_steal(skb);
err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph),
dev);
rt2 = skb_rtable(skb);
if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
skb_dst_drop(skb);
- skb->_skb_refdst = orefdst;
+ skb_dstref_restore(skb, orefdst);
return -EINVAL;
}
refdst_drop(orefdst);
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
index f62b17f59bb4a..0089c1605acfe 100644
--- a/net/ipv4/ping.c
+++ b/net/ipv4/ping.c
@@ -159,7 +159,7 @@ void ping_unhash(struct sock *sk)
pr_debug("ping_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
spin_lock(&ping_table.lock);
if (sk_del_node_init_rcu(sk)) {
- isk->inet_num = 0;
+ WRITE_ONCE(isk->inet_num, 0);
isk->inet_sport = 0;
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
}
@@ -192,31 +192,35 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
}

sk_for_each_rcu(sk, hslot) {
+ int bound_dev_if;
+
if (!net_eq(sock_net(sk), net))
continue;
isk = inet_sk(sk);

pr_debug("iterate\n");
- if (isk->inet_num != ident)
+ if (READ_ONCE(isk->inet_num) != ident)
continue;

+ bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
if (skb->protocol == htons(ETH_P_IP) &&
sk->sk_family == AF_INET) {
+ __be32 rcv_saddr = READ_ONCE(isk->inet_rcv_saddr);
+
pr_debug("found: %p: num=%d, daddr=%pI4, dif=%d\n", sk,
- (int) isk->inet_num, &isk->inet_rcv_saddr,
- sk->sk_bound_dev_if);
+ ident, &rcv_saddr,
+ bound_dev_if);

- if (isk->inet_rcv_saddr &&
- isk->inet_rcv_saddr != ip_hdr(skb)->daddr)
+ if (rcv_saddr && rcv_saddr != ip_hdr(skb)->daddr)
continue;
#if IS_ENABLED(CONFIG_IPV6)
} else if (skb->protocol == htons(ETH_P_IPV6) &&
sk->sk_family == AF_INET6) {

pr_debug("found: %p: num=%d, daddr=%pI6c, dif=%d\n", sk,
- (int) isk->inet_num,
+ ident,
&sk->sk_v6_rcv_saddr,
- sk->sk_bound_dev_if);
+ bound_dev_if);

if (!ipv6_addr_any(&sk->sk_v6_rcv_saddr) &&
!ipv6_addr_equal(&sk->sk_v6_rcv_saddr,
@@ -227,8 +231,8 @@ static struct sock *ping_lookup(struct net *net, struct sk_buff *skb, u16 ident)
continue;
}

- if (sk->sk_bound_dev_if && sk->sk_bound_dev_if != dif &&
- sk->sk_bound_dev_if != sdif)
+ if (bound_dev_if && bound_dev_if != dif &&
+ bound_dev_if != sdif)
continue;

goto exit;
@@ -403,7 +407,9 @@ static void ping_set_saddr(struct sock *sk, struct sockaddr *saddr)
if (saddr->sa_family == AF_INET) {
struct inet_sock *isk = inet_sk(sk);
struct sockaddr_in *addr = (struct sockaddr_in *) saddr;
- isk->inet_rcv_saddr = isk->inet_saddr = addr->sin_addr.s_addr;
+
+ isk->inet_saddr = addr->sin_addr.s_addr;
+ WRITE_ONCE(isk->inet_rcv_saddr, addr->sin_addr.s_addr);
#if IS_ENABLED(CONFIG_IPV6)
} else if (saddr->sa_family == AF_INET6) {
struct sockaddr_in6 *addr = (struct sockaddr_in6 *) saddr;
@@ -860,7 +866,8 @@ int ping_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags,
struct sk_buff *skb;
int copied, err;

- pr_debug("ping_recvmsg(sk=%p,sk->num=%u)\n", isk, isk->inet_num);
+ pr_debug("ping_recvmsg(sk=%p,sk->num=%u)\n", isk,
+ READ_ONCE(isk->inet_num));

err = -EOPNOTSUPP;
if (flags & MSG_OOB)
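The lookup only matches a device-bound socket when the packet arrived on that device (dif) or its L3 master (sdif); reading sk_bound_dev_if once keeps the debug print and the comparison consistent with each other. The matching rule, modeled (helper name is illustrative):

```python
def bound_dev_match(bound_dev_if: int, dif: int, sdif: int) -> bool:
    """An unbound socket (bound_dev_if == 0) matches any incoming device;
    a bound socket matches only its own ifindex or the L3 master index."""
    if not bound_dev_if:
        return True
    return bound_dev_if == dif or bound_dev_if == sdif
```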
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index ad5f30cefdf96..4090107b0c4d5 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -481,6 +481,9 @@ static void tcp_tx_timestamp(struct sock *sk, u16 tsflags)
{
struct sk_buff *skb = tcp_write_queue_tail(sk);

+ if (unlikely(!skb))
+ skb = skb_rb_last(&sk->tcp_rtx_queue);
+
if (tsflags && skb) {
struct skb_shared_info *shinfo = skb_shinfo(skb);
struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
index 8372ca512a755..f5817438f1734 100644
--- a/net/ipv4/tcp_bpf.c
+++ b/net/ipv4/tcp_bpf.c
@@ -10,6 +10,7 @@

#include <net/inet_common.h>
#include <net/tls.h>
+#include <asm/ioctls.h>

void tcp_eat_skb(struct sock *sk, struct sk_buff *skb)
{
@@ -226,6 +227,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
int peek = flags & MSG_PEEK;
struct sk_psock *psock;
struct tcp_sock *tcp;
+ int copied_from_self = 0;
int copied = 0;
u32 seq;

@@ -262,7 +264,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
}

msg_bytes_ready:
- copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+ copied = __sk_msg_recvmsg(sk, psock, msg, len, flags, &copied_from_self);
/* The typical case for EFAULT is the socket was gracefully
* shutdown with a FIN pkt. So check here the other case is
* some error on copy_page_to_iter which would be unexpected.
@@ -277,7 +279,7 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
goto out;
}
}
- seq += copied;
+ seq += copied_from_self;
if (!copied) {
long timeo;
int data;
@@ -331,6 +333,24 @@ static int tcp_bpf_recvmsg_parser(struct sock *sk,
return copied;
}

+static int tcp_bpf_ioctl(struct sock *sk, int cmd, int *karg)
+{
+ bool slow;
+
+ if (cmd != SIOCINQ)
+ return tcp_ioctl(sk, cmd, karg);
+
+ /* Works similarly to tcp_ioctl(). */
+ if (sk->sk_state == TCP_LISTEN)
+ return -EINVAL;
+
+ slow = lock_sock_fast(sk);
+ *karg = sk_psock_msg_inq(sk);
+ unlock_sock_fast(sk, slow);
+
+ return 0;
+}
+
static int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
int flags, int *addr_len)
{
@@ -609,6 +629,7 @@ static void tcp_bpf_rebuild_protos(struct proto prot[TCP_BPF_NUM_CFGS],
prot[TCP_BPF_BASE].close = sock_map_close;
prot[TCP_BPF_BASE].recvmsg = tcp_bpf_recvmsg;
prot[TCP_BPF_BASE].sock_is_readable = sk_msg_is_readable;
+ prot[TCP_BPF_BASE].ioctl = tcp_bpf_ioctl;

prot[TCP_BPF_TX] = prot[TCP_BPF_BASE];
prot[TCP_BPF_TX].sendmsg = tcp_bpf_sendmsg;
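From userspace, SIOCINQ is the same ioctl one would issue against any socket; on Linux it aliases FIONREAD. A small demonstration of the call shape against an ordinary socket pair (not a sockmap socket — just to show how a reader would query queued bytes):

```python
import fcntl
import socket
import struct
import termios

SIOCINQ = termios.FIONREAD  # on Linux, SIOCINQ is an alias for FIONREAD

def bytes_unread(sock: socket.socket) -> int:
    """Return the number of bytes queued for reading on sock."""
    raw = fcntl.ioctl(sock.fileno(), SIOCINQ, struct.pack("i", 0))
    return struct.unpack("i", raw)[0]

a, b = socket.socketpair()
a.sendall(b"hello")  # 5 bytes now queued on b's receive side
```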
diff --git a/net/ipv4/udp_bpf.c b/net/ipv4/udp_bpf.c
index 0735d820e413f..91233e37cd97a 100644
--- a/net/ipv4/udp_bpf.c
+++ b/net/ipv4/udp_bpf.c
@@ -5,6 +5,7 @@
#include <net/sock.h>
#include <net/udp.h>
#include <net/inet_common.h>
+#include <asm/ioctls.h>

#include "udp_impl.h"

@@ -111,12 +112,26 @@ enum {
static DEFINE_SPINLOCK(udpv6_prot_lock);
static struct proto udp_bpf_prots[UDP_BPF_NUM_PROTS];

+static int udp_bpf_ioctl(struct sock *sk, int cmd, int *karg)
+{
+ if (cmd != SIOCINQ)
+ return udp_ioctl(sk, cmd, karg);
+
+ /* Since we don't hold the socket lock, sk_receive_queue may still
+ * contain data that BPF is processing right now. We only care
+ * about the data already in ingress_msg here.
+ */
+ *karg = sk_msg_first_len(sk);
+ return 0;
+}
+
static void udp_bpf_rebuild_protos(struct proto *prot, const struct proto *base)
{
- *prot = *base;
- prot->close = sock_map_close;
- prot->recvmsg = udp_bpf_recvmsg;
- prot->sock_is_readable = sk_msg_is_readable;
+ *prot = *base;
+ prot->close = sock_map_close;
+ prot->recvmsg = udp_bpf_recvmsg;
+ prot->sock_is_readable = sk_msg_is_readable;
+ prot->ioctl = udp_bpf_ioctl;
}

static void udp_bpf_check_v6_needs_rebuild(struct proto *ops)
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index f60ec8b0f8ea4..0786155715249 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -224,8 +224,8 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
inet6_set_bit(MC6_LOOP, sk);
inet6_set_bit(MC6_ALL, sk);
np->pmtudisc = IPV6_PMTUDISC_WANT;
- inet6_assign_bit(REPFLOW, sk, net->ipv6.sysctl.flowlabel_reflect &
- FLOWLABEL_REFLECT_ESTABLISHED);
+ inet6_assign_bit(REPFLOW, sk, READ_ONCE(net->ipv6.sysctl.flowlabel_reflect) &
+ FLOWLABEL_REFLECT_ESTABLISHED);
sk->sk_ipv6only = net->ipv6.sysctl.bindv6only;
sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash);

diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
index 20f77fdfb73da..1a627c24e4c30 100644
--- a/net/ipv6/exthdrs.c
+++ b/net/ipv6/exthdrs.c
@@ -314,7 +314,7 @@ static int ipv6_destopt_rcv(struct sk_buff *skb)
}

extlen = (skb_transport_header(skb)[1] + 1) << 3;
- if (extlen > net->ipv6.sysctl.max_dst_opts_len)
+ if (extlen > READ_ONCE(net->ipv6.sysctl.max_dst_opts_len))
goto fail_and_free;

opt->lastopt = opt->dst1 = skb_network_header_len(skb);
@@ -322,7 +322,8 @@ static int ipv6_destopt_rcv(struct sk_buff *skb)
dstbuf = opt->dst1;
#endif

- if (ip6_parse_tlv(false, skb, net->ipv6.sysctl.max_dst_opts_cnt)) {
+ if (ip6_parse_tlv(false, skb,
+ READ_ONCE(net->ipv6.sysctl.max_dst_opts_cnt))) {
skb->transport_header += extlen;
opt = IP6CB(skb);
#if IS_ENABLED(CONFIG_IPV6_MIP6)
@@ -932,6 +933,11 @@ static bool ipv6_hop_ioam(struct sk_buff *skb, int optoff)
if (hdr->opt_len < 2 + sizeof(*trace) + trace->remlen * 4)
goto drop;

+ /* Inconsistent Pre-allocated Trace header */
+ if (trace->nodelen !=
+ ioam6_trace_compute_nodelen(be32_to_cpu(trace->type_be32)))
+ goto drop;
+
/* Ignore if the IOAM namespace is unknown */
ns = ioam6_namespace(dev_net(skb->dev), trace->namespace_id);
if (!ns)
@@ -1051,11 +1057,12 @@ int ipv6_parse_hopopts(struct sk_buff *skb)
}

extlen = (skb_transport_header(skb)[1] + 1) << 3;
- if (extlen > net->ipv6.sysctl.max_hbh_opts_len)
+ if (extlen > READ_ONCE(net->ipv6.sysctl.max_hbh_opts_len))
goto fail_and_free;

opt->flags |= IP6SKB_HOPBYHOP;
- if (ip6_parse_tlv(true, skb, net->ipv6.sysctl.max_hbh_opts_cnt)) {
+ if (ip6_parse_tlv(true, skb,
+ READ_ONCE(net->ipv6.sysctl.max_hbh_opts_cnt))) {
skb->transport_header += extlen;
opt = IP6CB(skb);
opt->nhoff = sizeof(struct ipv6hdr);
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 13a796bfc2f93..e43b49f1ddbb2 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -763,7 +763,8 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
tmp_hdr.icmp6_type = type;

memset(&fl6, 0, sizeof(fl6));
- if (net->ipv6.sysctl.flowlabel_reflect & FLOWLABEL_REFLECT_ICMPV6_ECHO_REPLIES)
+ if (READ_ONCE(net->ipv6.sysctl.flowlabel_reflect) &
+ FLOWLABEL_REFLECT_ICMPV6_ECHO_REPLIES)
fl6.flowlabel = ip6_flowlabel(ipv6_hdr(skb));

fl6.flowi6_proto = IPPROTO_ICMPV6;
@@ -871,6 +872,12 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
if (reason != SKB_NOT_DROPPED_YET)
goto out;

+ if (nexthdr == IPPROTO_RAW) {
+ /* Add a more specific reason later ? */
+ reason = SKB_DROP_REASON_NOT_SPECIFIED;
+ goto out;
+ }
+
/* BUGGG_FUTURE: we should try to parse exthdrs in this packet.
Without this we will not able f.e. to make source routed
pmtu discovery.
diff --git a/net/ipv6/ioam6.c b/net/ipv6/ioam6.c
index 08c9295130657..b6ed67d7027c1 100644
--- a/net/ipv6/ioam6.c
+++ b/net/ipv6/ioam6.c
@@ -694,6 +694,20 @@ struct ioam6_namespace *ioam6_namespace(struct net *net, __be16 id)
return rhashtable_lookup_fast(&nsdata->namespaces, &id, rht_ns_params);
}

+#define IOAM6_MASK_SHORT_FIELDS 0xff1ffc00
+#define IOAM6_MASK_WIDE_FIELDS 0x00e00000
+
+u8 ioam6_trace_compute_nodelen(u32 trace_type)
+{
+ u8 nodelen = hweight32(trace_type & IOAM6_MASK_SHORT_FIELDS)
+ * (sizeof(__be32) / 4);
+
+ nodelen += hweight32(trace_type & IOAM6_MASK_WIDE_FIELDS)
+ * (sizeof(__be64) / 4);
+
+ return nodelen;
+}
+
static void __ioam6_fill_trace_data(struct sk_buff *skb,
struct ioam6_namespace *ns,
struct ioam6_trace_hdr *trace,
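The new ioam6_trace_compute_nodelen() (with the field masks moved here from ioam6_iptunnel.c) is pure popcount arithmetic: each short field contributes one 4-octet unit, each wide field two. A Python model of the same computation:

```python
IOAM6_MASK_SHORT_FIELDS = 0xff1ffc00  # 4-octet trace fields
IOAM6_MASK_WIDE_FIELDS = 0x00e00000   # 8-octet trace fields

def trace_compute_nodelen(trace_type: int) -> int:
    """Node data length in 4-octet units, mirroring the kernel's
    ioam6_trace_compute_nodelen(): popcount over each mask, with
    wide fields counting double."""
    short = bin(trace_type & IOAM6_MASK_SHORT_FIELDS).count("1")
    wide = bin(trace_type & IOAM6_MASK_WIDE_FIELDS).count("1")
    return short + 2 * wide
```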
diff --git a/net/ipv6/ioam6_iptunnel.c b/net/ipv6/ioam6_iptunnel.c
index 9e4774771023b..7cbff7a0501bb 100644
--- a/net/ipv6/ioam6_iptunnel.c
+++ b/net/ipv6/ioam6_iptunnel.c
@@ -22,9 +22,6 @@
#include <net/ip6_route.h>
#include <net/addrconf.h>

-#define IOAM6_MASK_SHORT_FIELDS 0xff100000
-#define IOAM6_MASK_WIDE_FIELDS 0xe00000
-
struct ioam6_lwt_encap {
struct ipv6_hopopt_hdr eh;
u8 pad[2]; /* 2-octet padding for 4n-alignment */
@@ -92,13 +89,8 @@ static bool ioam6_validate_trace_hdr(struct ioam6_trace_hdr *trace)
trace->type.bit21 | trace->type.bit23)
return false;

- trace->nodelen = 0;
fields = be32_to_cpu(trace->type_be32);
-
- trace->nodelen += hweight32(fields & IOAM6_MASK_SHORT_FIELDS)
- * (sizeof(__be32) / 4);
- trace->nodelen += hweight32(fields & IOAM6_MASK_WIDE_FIELDS)
- * (sizeof(__be64) / 4);
+ trace->nodelen = ioam6_trace_compute_nodelen(fields);

return true;
}
diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
index d83430f4a0eff..01c953a39211a 100644
--- a/net/ipv6/ip6_fib.c
+++ b/net/ipv6/ip6_fib.c
@@ -1139,7 +1139,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
fib6_add_gc_list(iter);
}
if (!(rt->fib6_flags & (RTF_ADDRCONF | RTF_PREFIX_RT)) &&
- !iter->fib6_nh->fib_nh_gw_family) {
+ (iter->nh || !iter->fib6_nh->fib_nh_gw_family)) {
iter->fib6_flags &= ~RTF_ADDRCONF;
iter->fib6_flags &= ~RTF_PREFIX_RT;
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 06edfe0b2db90..c90b20c218bff 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1123,7 +1123,8 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb,
txhash = inet_twsk(sk)->tw_txhash;
}
} else {
- if (net->ipv6.sysctl.flowlabel_reflect & FLOWLABEL_REFLECT_TCP_RESET)
+ if (READ_ONCE(net->ipv6.sysctl.flowlabel_reflect) &
+ FLOWLABEL_REFLECT_TCP_RESET)
label = ip6_flowlabel(ipv6h);
}

diff --git a/net/ipv6/xfrm6_policy.c b/net/ipv6/xfrm6_policy.c
index 1f19b6f14484c..125ea9a5b8a08 100644
--- a/net/ipv6/xfrm6_policy.c
+++ b/net/ipv6/xfrm6_policy.c
@@ -57,6 +57,7 @@ static int xfrm6_get_saddr(xfrm_address_t *saddr,
struct dst_entry *dst;
struct net_device *dev;
struct inet6_dev *idev;
+ int err;

dst = xfrm6_dst_lookup(params);
if (IS_ERR(dst))
@@ -68,9 +69,11 @@ static int xfrm6_get_saddr(xfrm_address_t *saddr,
return -EHOSTUNREACH;
}
dev = idev->dev;
- ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
- &saddr->in6);
+ err = ipv6_dev_get_saddr(dev_net(dev), dev, &params->daddr->in6, 0,
+ &saddr->in6);
dst_release(dst);
+ if (err)
+ return -EHOSTUNREACH;
return 0;
}

diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
index 1d37b26ea2ef7..4e34311b27b18 100644
--- a/net/kcm/kcmsock.c
+++ b/net/kcm/kcmsock.c
@@ -627,7 +627,7 @@ static int kcm_write_msgs(struct kcm_sock *kcm)
skb = txm->frag_skb;
}

- if (WARN_ON(!skb_shinfo(skb)->nr_frags) ||
+ if (WARN_ON_ONCE(!skb_shinfo(skb)->nr_frags) ||
WARN_ON_ONCE(!skb_frag_page(&skb_shinfo(skb)->frags[0]))) {
ret = -EINVAL;
goto out;
@@ -748,7 +748,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
{
struct sock *sk = sock->sk;
struct kcm_sock *kcm = kcm_sk(sk);
- struct sk_buff *skb = NULL, *head = NULL;
+ struct sk_buff *skb = NULL, *head = NULL, *frag_prev = NULL;
size_t copy, copied = 0;
long timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
int eor = (sock->type == SOCK_DGRAM) ?
@@ -823,6 +823,7 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
else
skb->next = tskb;

+ frag_prev = skb;
skb = tskb;
skb->ip_summed = CHECKSUM_UNNECESSARY;
continue;
@@ -933,6 +934,22 @@ static int kcm_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
out_error:
kcm_push(kcm);

+ /* When MAX_SKB_FRAGS was reached, a new skb was allocated and
+ * linked into the frag_list before data copy. If the copy
+ * subsequently failed, this skb has zero frags. Remove it from
+ * the frag_list to prevent kcm_write_msgs from later hitting
+ * WARN_ON(!skb_shinfo(skb)->nr_frags).
+ */
+ if (frag_prev && !skb_shinfo(skb)->nr_frags) {
+ if (head == frag_prev)
+ skb_shinfo(head)->frag_list = NULL;
+ else
+ frag_prev->next = NULL;
+ kfree_skb(skb);
+ /* Update skb as it may be saved in partial_message via goto */
+ skb = frag_prev;
+ }
+
if (sock->type == SOCK_SEQPACKET) {
/* Wrote some bytes before encountering an
* error, return partial success.
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index f2bf78c019df4..e682d52a06b7e 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2085,8 +2085,8 @@ static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied)

msk->rcvq_space.copied += copied;

- mstamp = div_u64(tcp_clock_ns(), NSEC_PER_USEC);
- time = tcp_stamp_us_delta(mstamp, msk->rcvq_space.time);
+ mstamp = mptcp_stamp();
+ time = tcp_stamp_us_delta(mstamp, READ_ONCE(msk->rcvq_space.time));

rtt_us = msk->rcvq_space.rtt_us;
if (rtt_us && time < (rtt_us >> 3))
@@ -3493,6 +3493,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
__mptcp_propagate_sndbuf(nsk, ssk);

mptcp_rcv_space_init(msk, ssk);
+ msk->rcvq_space.time = mptcp_stamp();

if (mp_opt->suboptions & OPTION_MPTCP_MPC_ACK)
__mptcp_subflow_fully_established(msk, subflow, mp_opt);
@@ -3510,8 +3511,6 @@ void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk)
msk->rcvq_space.copied = 0;
msk->rcvq_space.rtt_us = 0;

- msk->rcvq_space.time = tp->tcp_mstamp;
-
/* initial rcv_space offering made to peer */
msk->rcvq_space.space = min_t(u32, tp->rcv_wnd,
TCP_INIT_CWND * tp->advmss);
@@ -3727,6 +3726,7 @@ void mptcp_finish_connect(struct sock *ssk)
* accessing the field below
*/
WRITE_ONCE(msk->local_key, subflow->local_key);
+ WRITE_ONCE(msk->rcvq_space.time, mptcp_stamp());

mptcp_pm_new_connection(msk, ssk, 0);
}
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index bdec5ad9defb9..b266002660d70 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -861,6 +861,11 @@ static inline bool mptcp_is_fully_established(struct sock *sk)
READ_ONCE(mptcp_sk(sk)->fully_established);
}

+static inline u64 mptcp_stamp(void)
+{
+ return div_u64(tcp_clock_ns(), NSEC_PER_USEC);
+}
+
void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk);
void mptcp_data_ready(struct sock *sk, struct sock *ssk);
bool mptcp_finish_join(struct sock *sk);
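mptcp_stamp() gives MPTCP its own microsecond timestamp source instead of borrowing tp->tcp_mstamp from a subflow: the nanosecond TCP clock divided down with div_u64. The arithmetic, modeled:

```python
NSEC_PER_USEC = 1000

def mptcp_stamp(clock_ns: int) -> int:
    """Model of mptcp_stamp(): truncating ns -> us division (div_u64)."""
    return clock_ns // NSEC_PER_USEC
```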
diff --git a/net/netfilter/ipvs/ip_vs_xmit.c b/net/netfilter/ipvs/ip_vs_xmit.c
index fa2db17f6298b..8892f261451e9 100644
--- a/net/netfilter/ipvs/ip_vs_xmit.c
+++ b/net/netfilter/ipvs/ip_vs_xmit.c
@@ -295,6 +295,12 @@ static inline bool decrement_ttl(struct netns_ipvs *ipvs,
return true;
}

+/* rt has a device that is down */
+static bool rt_dev_is_down(const struct net_device *dev)
+{
+ return dev && !netif_running(dev);
+}
+
/* Get route to destination or remote server */
static int
__ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
@@ -310,9 +316,11 @@ __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,

if (dest) {
dest_dst = __ip_vs_dst_check(dest);
- if (likely(dest_dst))
+ if (likely(dest_dst)) {
rt = dst_rtable(dest_dst->dst_cache);
- else {
+ if (ret_saddr)
+ *ret_saddr = dest_dst->dst_saddr.ip;
+ } else {
dest_dst = ip_vs_dest_dst_alloc();
spin_lock_bh(&dest->dst_lock);
if (!dest_dst) {
@@ -328,14 +336,22 @@ __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
ip_vs_dest_dst_free(dest_dst);
goto err_unreach;
}
- __ip_vs_dst_set(dest, dest_dst, &rt->dst, 0);
+ /* It is forbidden to attach dest->dest_dst if the
+ * device is going down.
+ */
+ if (!rt_dev_is_down(dst_dev_rcu(&rt->dst)))
+ __ip_vs_dst_set(dest, dest_dst, &rt->dst, 0);
+ else
+ noref = 0;
spin_unlock_bh(&dest->dst_lock);
IP_VS_DBG(10, "new dst %pI4, src %pI4, refcnt=%d\n",
&dest->addr.ip, &dest_dst->dst_saddr.ip,
rcuref_read(&rt->dst.__rcuref));
+ if (ret_saddr)
+ *ret_saddr = dest_dst->dst_saddr.ip;
+ if (!noref)
+ ip_vs_dest_dst_free(dest_dst);
}
- if (ret_saddr)
- *ret_saddr = dest_dst->dst_saddr.ip;
} else {
noref = 0;

@@ -472,9 +488,11 @@ __ip_vs_get_out_rt_v6(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,

if (dest) {
dest_dst = __ip_vs_dst_check(dest);
- if (likely(dest_dst))
+ if (likely(dest_dst)) {
rt = dst_rt6_info(dest_dst->dst_cache);
- else {
+ if (ret_saddr)
+ *ret_saddr = dest_dst->dst_saddr.in6;
+ } else {
u32 cookie;

dest_dst = ip_vs_dest_dst_alloc();
@@ -495,14 +513,22 @@ __ip_vs_get_out_rt_v6(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
}
rt = dst_rt6_info(dst);
cookie = rt6_get_cookie(rt);
- __ip_vs_dst_set(dest, dest_dst, &rt->dst, cookie);
+ /* It is forbidden to attach dest->dest_dst if the
+ * device is going down.
+ */
+ if (!rt_dev_is_down(dst_dev_rcu(&rt->dst)))
+ __ip_vs_dst_set(dest, dest_dst, &rt->dst, cookie);
+ else
+ noref = 0;
spin_unlock_bh(&dest->dst_lock);
IP_VS_DBG(10, "new dst %pI6, src %pI6, refcnt=%d\n",
&dest->addr.in6, &dest_dst->dst_saddr.in6,
rcuref_read(&rt->dst.__rcuref));
+ if (ret_saddr)
+ *ret_saddr = dest_dst->dst_saddr.in6;
+ if (!noref)
+ ip_vs_dest_dst_free(dest_dst);
}
- if (ret_saddr)
- *ret_saddr = dest_dst->dst_saddr.in6;
} else {
noref = 0;
dst = __ip_vs_route_output_v6(net, daddr, ret_saddr, do_xfrm,
diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
index 828d5c64c68a3..14e62b3263cd9 100644
--- a/net/netfilter/nf_conncount.c
+++ b/net/netfilter/nf_conncount.c
@@ -34,8 +34,9 @@

#define CONNCOUNT_SLOTS 256U

-#define CONNCOUNT_GC_MAX_NODES 8
-#define MAX_KEYLEN 5
+#define CONNCOUNT_GC_MAX_NODES 8
+#define CONNCOUNT_GC_MAX_COLLECT 64
+#define MAX_KEYLEN 5

/* we will save the tuples of all connections we care about */
struct nf_conncount_tuple {
@@ -178,16 +179,28 @@ static int __nf_conncount_add(struct net *net,
return -ENOENT;

if (ct && nf_ct_is_confirmed(ct)) {
- err = -EEXIST;
- goto out_put;
+ /* local connections are confirmed in postrouting so confirmation
+ * might have happened before hitting connlimit
+ */
+ if (skb->skb_iif != LOOPBACK_IFINDEX) {
+ err = -EEXIST;
+ goto out_put;
+ }
+
+ /* This is likely a local connection; skip the optimization to
+ * avoid adding duplicates from a 'packet train'.
+ */
}

- if ((u32)jiffies == list->last_gc)
+ if ((u32)jiffies == list->last_gc &&
+ (list->count - list->last_gc_count) < CONNCOUNT_GC_MAX_COLLECT)
goto add_new_node;

+check_connections:
/* check the saved connections */
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
- if (collect > CONNCOUNT_GC_MAX_NODES)
+ if (collect > CONNCOUNT_GC_MAX_COLLECT)
break;

found = find_or_evict(net, list, conn);
@@ -230,6 +243,7 @@ static int __nf_conncount_add(struct net *net,
nf_ct_put(found_ct);
}
list->last_gc = (u32)jiffies;
+ list->last_gc_count = list->count;

add_new_node:
if (WARN_ON_ONCE(list->count > INT_MAX)) {
@@ -277,13 +291,14 @@ void nf_conncount_list_init(struct nf_conncount_list *list)
spin_lock_init(&list->list_lock);
INIT_LIST_HEAD(&list->head);
list->count = 0;
+ list->last_gc_count = 0;
list->last_gc = (u32)jiffies;
}
EXPORT_SYMBOL_GPL(nf_conncount_list_init);

/* Return true if the list is empty. Must be called with BH disabled. */
-bool nf_conncount_gc_list(struct net *net,
- struct nf_conncount_list *list)
+static bool __nf_conncount_gc_list(struct net *net,
+ struct nf_conncount_list *list)
{
const struct nf_conntrack_tuple_hash *found;
struct nf_conncount_tuple *conn, *conn_n;
@@ -295,10 +310,6 @@ bool nf_conncount_gc_list(struct net *net,
if ((u32)jiffies == READ_ONCE(list->last_gc))
return false;

- /* don't bother if other cpu is already doing GC */
- if (!spin_trylock(&list->list_lock))
- return false;
-
list_for_each_entry_safe(conn, conn_n, &list->head, node) {
found = find_or_evict(net, list, conn);
if (IS_ERR(found)) {
@@ -320,14 +331,29 @@ bool nf_conncount_gc_list(struct net *net,
}

nf_ct_put(found_ct);
- if (collected > CONNCOUNT_GC_MAX_NODES)
+ if (collected > CONNCOUNT_GC_MAX_COLLECT)
break;
}

if (!list->count)
ret = true;
list->last_gc = (u32)jiffies;
- spin_unlock(&list->list_lock);
+ list->last_gc_count = list->count;
+
+ return ret;
+}
+
+bool nf_conncount_gc_list(struct net *net,
+ struct nf_conncount_list *list)
+{
+ bool ret;
+
+ /* don't bother if other cpu is already doing GC */
+ if (!spin_trylock_bh(&list->list_lock))
+ return false;
+
+ ret = __nf_conncount_gc_list(net, list);
+ spin_unlock_bh(&list->list_lock);

return ret;
}
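The new last_gc_count field tightens the add-path fast path: a scan is skipped only while we are still in the same jiffy as the last GC *and* the list has grown by fewer than CONNCOUNT_GC_MAX_COLLECT entries since. That predicate, modeled (function name is illustrative):

```python
CONNCOUNT_GC_MAX_COLLECT = 64

def skip_gc_scan(jiffies_now: int, count: int,
                 last_gc: int, last_gc_count: int) -> bool:
    """True when __nf_conncount_add() may jump straight to add_new_node:
    a GC already ran in this jiffy and fewer than
    CONNCOUNT_GC_MAX_COLLECT entries were added since."""
    return (jiffies_now == last_gc and
            count - last_gc_count < CONNCOUNT_GC_MAX_COLLECT)
```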
diff --git a/net/netfilter/nf_conntrack_h323_asn1.c b/net/netfilter/nf_conntrack_h323_asn1.c
index 540d97715bd23..62aa22a078769 100644
--- a/net/netfilter/nf_conntrack_h323_asn1.c
+++ b/net/netfilter/nf_conntrack_h323_asn1.c
@@ -796,7 +796,7 @@ static int decode_choice(struct bitstr *bs, const struct field_t *f,

if (ext || (son->attr & OPEN)) {
BYTE_ALIGN(bs);
- if (nf_h323_error_boundary(bs, len, 0))
+ if (nf_h323_error_boundary(bs, 2, 0))
return H323_ERROR_BOUND;
len = get_len(bs);
if (nf_h323_error_boundary(bs, len, 0))
diff --git a/net/netfilter/nf_conntrack_h323_main.c b/net/netfilter/nf_conntrack_h323_main.c
index 5a9bce24f3c3d..ed983421e2eb2 100644
--- a/net/netfilter/nf_conntrack_h323_main.c
+++ b/net/netfilter/nf_conntrack_h323_main.c
@@ -1186,13 +1186,13 @@ static struct nf_conntrack_expect *find_expect(struct nf_conn *ct,
{
struct net *net = nf_ct_net(ct);
struct nf_conntrack_expect *exp;
- struct nf_conntrack_tuple tuple;
+ struct nf_conntrack_tuple tuple = {
+ .src.l3num = nf_ct_l3num(ct),
+ .dst.protonum = IPPROTO_TCP,
+ .dst.u.tcp.port = port,
+ };

- memset(&tuple.src.u3, 0, sizeof(tuple.src.u3));
- tuple.src.u.tcp.port = 0;
memcpy(&tuple.dst.u3, addr, sizeof(tuple.dst.u3));
- tuple.dst.u.tcp.port = port;
- tuple.dst.protonum = IPPROTO_TCP;

exp = __nf_ct_expect_find(net, nf_ct_zone(ct), &tuple);
if (exp && exp->master == ct)
diff --git a/net/netfilter/nf_conntrack_proto_generic.c b/net/netfilter/nf_conntrack_proto_generic.c
index e831637bc8ca8..cb260eb3d012c 100644
--- a/net/netfilter/nf_conntrack_proto_generic.c
+++ b/net/netfilter/nf_conntrack_proto_generic.c
@@ -67,6 +67,7 @@ void nf_conntrack_generic_init_net(struct net *net)
const struct nf_conntrack_l4proto nf_conntrack_l4proto_generic =
{
.l4proto = 255,
+ .allow_clash = true,
#ifdef CONFIG_NF_CONNTRACK_TIMEOUT
.ctnl_timeout = {
.nlattr_to_obj = generic_timeout_nlattr_to_obj,
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 3bf88c137868a..8dccd3598166b 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -2645,6 +2645,7 @@ static int nf_tables_addchain(struct nft_ctx *ctx, u8 family, u8 genmask,

err_register_hook:
nft_chain_del(chain);
+ synchronize_rcu();
err_chain_add:
nft_trans_destroy(trans);
err_trans:
@@ -7322,6 +7323,11 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
* and an existing one.
*/
err = -EEXIST;
+ } else if (err == -ECANCELED) {
+ /* ECANCELED reports an existing nul-element in
+ * interval sets.
+ */
+ err = 0;
}
goto err_element_clash;
}
@@ -11071,6 +11077,13 @@ static int nf_tables_abort(struct net *net, struct sk_buff *skb,
ret = __nf_tables_abort(net, action);
nft_gc_seq_end(nft_net, gc_seq);

+ if (action == NFNL_ABORT_NONE) {
+ struct nft_table *table;
+
+ list_for_each_entry(table, &nft_net->tables, list)
+ table->validate_state = NFT_VALIDATE_SKIP;
+ }
+
WARN_ON_ONCE(!list_empty(&nft_net->commit_list));

/* module autoload needs to happen after GC sequence update because it
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
index d2773ce9b5853..af35dbc19864a 100644
--- a/net/netfilter/nfnetlink_queue.c
+++ b/net/netfilter/nfnetlink_queue.c
@@ -30,6 +30,8 @@
#include <linux/netfilter/nf_conntrack_common.h>
#include <linux/list.h>
#include <linux/cgroup-defs.h>
+#include <linux/rhashtable.h>
+#include <linux/jhash.h>
#include <net/gso.h>
#include <net/sock.h>
#include <net/tcp_states.h>
@@ -47,6 +49,8 @@
#endif

#define NFQNL_QMAX_DEFAULT 1024
+#define NFQNL_HASH_MIN 1024
+#define NFQNL_HASH_MAX 1048576

/* We're using struct nlattr which has 16bit nla_len. Note that nla_len
* includes the header length. Thus, the maximum packet length that we
@@ -56,6 +60,26 @@
*/
#define NFQNL_MAX_COPY_RANGE (0xffff - NLA_HDRLEN)

+/* Composite key for packet lookup: (net, queue_num, packet_id) */
+struct nfqnl_packet_key {
+ possible_net_t net;
+ u32 packet_id;
+ u16 queue_num;
+} __aligned(sizeof(u32)); /* jhash2 requires 32-bit alignment */
+
+/* Global rhashtable - one for entire system, all netns */
+static struct rhashtable nfqnl_packet_map __read_mostly;
+
+/* Helper to initialize composite key */
+static inline void nfqnl_init_key(struct nfqnl_packet_key *key,
+ struct net *net, u32 packet_id, u16 queue_num)
+{
+ memset(key, 0, sizeof(*key));
+ write_pnet(&key->net, net);
+ key->packet_id = packet_id;
+ key->queue_num = queue_num;
+}
+
struct nfqnl_instance {
struct hlist_node hlist; /* global list of queues */
struct rcu_head rcu;
@@ -100,6 +124,39 @@ static inline u_int8_t instance_hashfn(u_int16_t queue_num)
return ((queue_num >> 8) ^ queue_num) % INSTANCE_BUCKETS;
}

+/* Extract composite key from nf_queue_entry for hashing */
+static u32 nfqnl_packet_obj_hashfn(const void *data, u32 len, u32 seed)
+{
+ const struct nf_queue_entry *entry = data;
+ struct nfqnl_packet_key key;
+
+ nfqnl_init_key(&key, entry->state.net, entry->id, entry->queue_num);
+
+ return jhash2((u32 *)&key, sizeof(key) / sizeof(u32), seed);
+}
+
+/* Compare stack-allocated key against entry */
+static int nfqnl_packet_obj_cmpfn(struct rhashtable_compare_arg *arg,
+ const void *obj)
+{
+ const struct nfqnl_packet_key *key = arg->key;
+ const struct nf_queue_entry *entry = obj;
+
+ return !net_eq(entry->state.net, read_pnet(&key->net)) ||
+ entry->queue_num != key->queue_num ||
+ entry->id != key->packet_id;
+}
+
+static const struct rhashtable_params nfqnl_rhashtable_params = {
+ .head_offset = offsetof(struct nf_queue_entry, hash_node),
+ .key_len = sizeof(struct nfqnl_packet_key),
+ .obj_hashfn = nfqnl_packet_obj_hashfn,
+ .obj_cmpfn = nfqnl_packet_obj_cmpfn,
+ .automatic_shrinking = true,
+ .min_size = NFQNL_HASH_MIN,
+ .max_size = NFQNL_HASH_MAX,
+};
+
static struct nfqnl_instance *
instance_lookup(struct nfnl_queue_net *q, u_int16_t queue_num)
{
@@ -191,33 +248,45 @@ instance_destroy(struct nfnl_queue_net *q, struct nfqnl_instance *inst)
spin_unlock(&q->instances_lock);
}

-static inline void
+static int
__enqueue_entry(struct nfqnl_instance *queue, struct nf_queue_entry *entry)
{
- list_add_tail(&entry->list, &queue->queue_list);
- queue->queue_total++;
+ int err;
+
+ entry->queue_num = queue->queue_num;
+
+ err = rhashtable_insert_fast(&nfqnl_packet_map, &entry->hash_node,
+ nfqnl_rhashtable_params);
+ if (unlikely(err))
+ return err;
+
+ list_add_tail(&entry->list, &queue->queue_list);
+ queue->queue_total++;
+
+ return 0;
}

static void
__dequeue_entry(struct nfqnl_instance *queue, struct nf_queue_entry *entry)
{
+ rhashtable_remove_fast(&nfqnl_packet_map, &entry->hash_node,
+ nfqnl_rhashtable_params);
list_del(&entry->list);
queue->queue_total--;
}

static struct nf_queue_entry *
-find_dequeue_entry(struct nfqnl_instance *queue, unsigned int id)
+find_dequeue_entry(struct nfqnl_instance *queue, unsigned int id,
+ struct net *net)
{
- struct nf_queue_entry *entry = NULL, *i;
+ struct nfqnl_packet_key key;
+ struct nf_queue_entry *entry;

- spin_lock_bh(&queue->lock);
+ nfqnl_init_key(&key, net, id, queue->queue_num);

- list_for_each_entry(i, &queue->queue_list, list) {
- if (i->id == id) {
- entry = i;
- break;
- }
- }
+ spin_lock_bh(&queue->lock);
+ entry = rhashtable_lookup_fast(&nfqnl_packet_map, &key,
+ nfqnl_rhashtable_params);

if (entry)
__dequeue_entry(queue, entry);
@@ -369,6 +438,34 @@ static void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict)
nf_queue_entry_free(entry);
}

+/* Return true if the entry has an unconfirmed conntrack attached that
+ * we do not own exclusively.
+ */
+static bool nf_ct_drop_unconfirmed(const struct nf_queue_entry *entry, bool *is_unconfirmed)
+{
+#if IS_ENABLED(CONFIG_NF_CONNTRACK)
+ struct nf_conn *ct = (void *)skb_nfct(entry->skb);
+
+ if (!ct || nf_ct_is_confirmed(ct))
+ return false;
+
+ if (is_unconfirmed)
+ *is_unconfirmed = true;
+
+ /* in some cases skb_clone() can occur after initial conntrack
+ * pickup, but conntrack assumes exclusive skb->_nfct ownership for
+ * unconfirmed entries.
+ *
+ * This happens for br_netfilter and with ip multicast routing.
+ * This can't be solved with serialization here because one clone
+ * could have been queued for local delivery or could be transmitted
+ * in parallel on another CPU.
+ */
+ return refcount_read(&ct->ct_general.use) > 1;
+#endif
+ return false;
+}
+
static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
{
const struct nf_ct_hook *ct_hook;
@@ -396,6 +493,24 @@ static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
break;
}
}
+
+ if (verdict != NF_DROP && entry->nf_ct_is_unconfirmed) {
+ /* If first queued segment was already reinjected then
+ * there is a good chance the ct entry is now confirmed.
+ *
+ * Handle the rare cases:
+ * - out-of-order verdict
+ * - threaded userspace reinjecting in parallel
+ * - first segment was dropped
+ *
+ * In all of those cases we can't handle this packet
+ * because we can't be sure that another CPU won't modify
+ * nf_conn->ext in parallel which isn't allowed.
+ */
+ if (nf_ct_drop_unconfirmed(entry, NULL))
+ verdict = NF_DROP;
+ }
+
nf_reinject(entry, verdict);
}

@@ -407,8 +522,7 @@ nfqnl_flush(struct nfqnl_instance *queue, nfqnl_cmpfn cmpfn, unsigned long data)
spin_lock_bh(&queue->lock);
list_for_each_entry_safe(entry, next, &queue->queue_list, list) {
if (!cmpfn || cmpfn(entry, data)) {
- list_del(&entry->list);
- queue->queue_total--;
+ __dequeue_entry(queue, entry);
nfqnl_reinject(entry, NF_DROP);
}
}
@@ -824,49 +938,6 @@ nfqnl_build_packet_message(struct net *net, struct nfqnl_instance *queue,
return NULL;
}

-static bool nf_ct_drop_unconfirmed(const struct nf_queue_entry *entry)
-{
-#if IS_ENABLED(CONFIG_NF_CONNTRACK)
- static const unsigned long flags = IPS_CONFIRMED | IPS_DYING;
- struct nf_conn *ct = (void *)skb_nfct(entry->skb);
- unsigned long status;
- unsigned int use;
-
- if (!ct)
- return false;
-
- status = READ_ONCE(ct->status);
- if ((status & flags) == IPS_DYING)
- return true;
-
- if (status & IPS_CONFIRMED)
- return false;
-
- /* in some cases skb_clone() can occur after initial conntrack
- * pickup, but conntrack assumes exclusive skb->_nfct ownership for
- * unconfirmed entries.
- *
- * This happens for br_netfilter and with ip multicast routing.
- * We can't be solved with serialization here because one clone could
- * have been queued for local delivery.
- */
- use = refcount_read(&ct->ct_general.use);
- if (likely(use == 1))
- return false;
-
- /* Can't decrement further? Exclusive ownership. */
- if (!refcount_dec_not_one(&ct->ct_general.use))
- return false;
-
- skb_set_nfct(entry->skb, 0);
- /* No nf_ct_put(): we already decremented .use and it cannot
- * drop down to 0.
- */
- return true;
-#endif
- return false;
-}
-
static int
__nfqnl_enqueue_packet(struct net *net, struct nfqnl_instance *queue,
struct nf_queue_entry *entry)
@@ -883,26 +954,23 @@ __nfqnl_enqueue_packet(struct net *net, struct nfqnl_instance *queue,
}
spin_lock_bh(&queue->lock);

- if (nf_ct_drop_unconfirmed(entry))
- goto err_out_free_nskb;
+ if (queue->queue_total >= queue->queue_maxlen)
+ goto err_out_queue_drop;

- if (queue->queue_total >= queue->queue_maxlen) {
- if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
- failopen = 1;
- err = 0;
- } else {
- queue->queue_dropped++;
- net_warn_ratelimited("nf_queue: full at %d entries, dropping packets(s)\n",
- queue->queue_total);
- }
- goto err_out_free_nskb;
- }
entry->id = ++queue->id_sequence;
*packet_id_ptr = htonl(entry->id);

+ /* Insert into hash BEFORE unicast; if insertion fails, don't send to userspace. */
+ err = __enqueue_entry(queue, entry);
+ if (unlikely(err))
+ goto err_out_queue_drop;
+
/* nfnetlink_unicast will either free the nskb or add it to a socket */
err = nfnetlink_unicast(nskb, net, queue->peer_portid);
if (err < 0) {
+ /* Unicast failed - remove entry we just inserted */
+ __dequeue_entry(queue, entry);
+
if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
failopen = 1;
err = 0;
@@ -912,12 +980,22 @@ __nfqnl_enqueue_packet(struct net *net, struct nfqnl_instance *queue,
goto err_out_unlock;
}

- __enqueue_entry(queue, entry);
-
spin_unlock_bh(&queue->lock);
return 0;

-err_out_free_nskb:
+err_out_queue_drop:
+ if (queue->flags & NFQA_CFG_F_FAIL_OPEN) {
+ failopen = 1;
+ err = 0;
+ } else {
+ queue->queue_dropped++;
+
+ if (queue->queue_total >= queue->queue_maxlen)
+ net_warn_ratelimited("nf_queue: full at %d entries, dropping packets(s)\n",
+ queue->queue_total);
+ else
+ net_warn_ratelimited("nf_queue: hash insert failed: %d\n", err);
+ }
kfree_skb(nskb);
err_out_unlock:
spin_unlock_bh(&queue->lock);
@@ -996,9 +1074,10 @@ __nfqnl_enqueue_packet_gso(struct net *net, struct nfqnl_instance *queue,
static int
nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
{
- unsigned int queued;
- struct nfqnl_instance *queue;
struct sk_buff *skb, *segs, *nskb;
+ bool ct_is_unconfirmed = false;
+ struct nfqnl_instance *queue;
+ unsigned int queued;
int err = -ENOBUFS;
struct net *net = entry->state.net;
struct nfnl_queue_net *q = nfnl_queue_pernet(net);
@@ -1022,6 +1101,15 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
break;
}

+ /* Check if someone already holds another reference to
+ * unconfirmed ct. If so, we cannot queue the skb:
+ * concurrent modifications of nf_conn->ext are not
+ * allowed and we can't know if another CPU isn't
+ * processing the same nf_conn entry in parallel.
+ */
+ if (nf_ct_drop_unconfirmed(entry, &ct_is_unconfirmed))
+ return -EINVAL;
+
if (!skb_is_gso(skb) || ((queue->flags & NFQA_CFG_F_GSO) && !skb_is_gso_sctp(skb)))
return __nfqnl_enqueue_packet(net, queue, entry);

@@ -1035,7 +1123,23 @@ nfqnl_enqueue_packet(struct nf_queue_entry *entry, unsigned int queuenum)
goto out_err;
queued = 0;
err = 0;
+
skb_list_walk_safe(segs, segs, nskb) {
+ if (ct_is_unconfirmed && queued > 0) {
+ /* skb_gso_segment() increments the ct refcount.
+ * This is a problem for unconfirmed (not in hash)
+ * entries, those can race when reinjections happen
+ * in parallel.
+ *
+ * Annotate this for all queued entries except the
+ * first one.
+ *
+ * As long as the first one is reinjected first it
+ * will do the confirmation for us.
+ */
+ entry->nf_ct_is_unconfirmed = ct_is_unconfirmed;
+ }
+
if (err == 0)
err = __nfqnl_enqueue_packet_gso(net, queue,
segs, entry);
@@ -1428,7 +1532,7 @@ static int nfqnl_recv_verdict(struct sk_buff *skb, const struct nfnl_info *info,

verdict = ntohl(vhdr->verdict);

- entry = find_dequeue_entry(queue, ntohl(vhdr->id));
+ entry = find_dequeue_entry(queue, ntohl(vhdr->id), info->net);
if (entry == NULL)
return -ENOENT;

@@ -1779,10 +1883,14 @@ static int __init nfnetlink_queue_init(void)
{
int status;

+ status = rhashtable_init(&nfqnl_packet_map, &nfqnl_rhashtable_params);
+ if (status < 0)
+ return status;
+
status = register_pernet_subsys(&nfnl_queue_net_ops);
if (status < 0) {
pr_err("failed to register pernet ops\n");
- goto out;
+ goto cleanup_rhashtable;
}

netlink_register_notifier(&nfqnl_rtnl_notifier);
@@ -1807,7 +1915,8 @@ static int __init nfnetlink_queue_init(void)
cleanup_netlink_notifier:
netlink_unregister_notifier(&nfqnl_rtnl_notifier);
unregister_pernet_subsys(&nfnl_queue_net_ops);
-out:
+cleanup_rhashtable:
+ rhashtable_destroy(&nfqnl_packet_map);
return status;
}

@@ -1819,6 +1928,8 @@ static void __exit nfnetlink_queue_fini(void)
netlink_unregister_notifier(&nfqnl_rtnl_notifier);
unregister_pernet_subsys(&nfnl_queue_net_ops);

+ rhashtable_destroy(&nfqnl_packet_map);
+
rcu_barrier(); /* Wait for completion of call_rcu()'s */
}
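The core of the nfnetlink_queue change above replaces the linear `list_for_each_entry()` scan in `find_dequeue_entry()` with an rhashtable lookup keyed on (net, queue number, packet id), so verdict processing is O(1) expected rather than O(n) in queue depth. As a rough illustration of that keyed-lookup pattern only (a userspace chained hash table with hypothetical names, not the kernel rhashtable API):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-ins for the kernel's nfqnl_packet_key and
 * nf_queue_entry; names and layout are illustrative only. */
struct pkt_key { uint32_t net_id, queue_num, id; };

struct pkt_entry {
	struct pkt_key key;
	struct pkt_entry *next;		/* bucket chain */
};

#define NBUCKETS 256

static struct pkt_entry *buckets[NBUCKETS];

static unsigned int pkt_hash(const struct pkt_key *k)
{
	/* trivial mix; the kernel hashes the key via
	 * nfqnl_rhashtable_params instead */
	uint32_t h = k->net_id * 2654435761u ^ k->queue_num * 40503u ^ k->id;

	return h % NBUCKETS;
}

static void pkt_insert(struct pkt_entry *e)
{
	unsigned int b = pkt_hash(&e->key);

	e->next = buckets[b];
	buckets[b] = e;
}

static struct pkt_entry *pkt_lookup(const struct pkt_key *k)
{
	struct pkt_entry *e;

	for (e = buckets[pkt_hash(k)]; e; e = e->next)
		if (!memcmp(&e->key, k, sizeof(*k)))
			return e;	/* O(1) expected, vs. O(n) list walk */
	return NULL;
}
```

The patch keeps the per-queue list alongside the map (for flushing), which is why `__dequeue_entry()` removes from both.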

diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
index 72711d62fddfa..08f620311b03f 100644
--- a/net/netfilter/nft_compat.c
+++ b/net/netfilter/nft_compat.c
@@ -134,7 +134,8 @@ static void nft_target_eval_bridge(const struct nft_expr *expr,
}

static const struct nla_policy nft_target_policy[NFTA_TARGET_MAX + 1] = {
- [NFTA_TARGET_NAME] = { .type = NLA_NUL_STRING },
+ [NFTA_TARGET_NAME] = { .type = NLA_NUL_STRING,
+ .len = XT_EXTENSION_MAXNAMELEN, },
[NFTA_TARGET_REV] = NLA_POLICY_MAX(NLA_BE32, 255),
[NFTA_TARGET_INFO] = { .type = NLA_BINARY },
};
@@ -434,7 +435,8 @@ static void nft_match_eval(const struct nft_expr *expr,
}

static const struct nla_policy nft_match_policy[NFTA_MATCH_MAX + 1] = {
- [NFTA_MATCH_NAME] = { .type = NLA_NUL_STRING },
+ [NFTA_MATCH_NAME] = { .type = NLA_NUL_STRING,
+ .len = XT_EXTENSION_MAXNAMELEN },
[NFTA_MATCH_REV] = NLA_POLICY_MAX(NLA_BE32, 255),
[NFTA_MATCH_INFO] = { .type = NLA_BINARY },
};
@@ -693,7 +695,12 @@ static int nfnl_compat_get_rcu(struct sk_buff *skb,

name = nla_data(tb[NFTA_COMPAT_NAME]);
rev = ntohl(nla_get_be32(tb[NFTA_COMPAT_REV]));
- target = ntohl(nla_get_be32(tb[NFTA_COMPAT_TYPE]));
+ /* x_tables api checks for 'target == 1' to mean target,
+ * everything else means 'match'.
+ * In x_tables world, the number is set by kernel, not
+ * userspace.
+ */
+ target = nla_get_be32(tb[NFTA_COMPAT_TYPE]) == htonl(1);

switch(family) {
case AF_INET:
diff --git a/net/netfilter/nft_connlimit.c b/net/netfilter/nft_connlimit.c
index 83a7d5769396c..5dd50b3ab5a45 100644
--- a/net/netfilter/nft_connlimit.c
+++ b/net/netfilter/nft_connlimit.c
@@ -232,13 +232,8 @@ static void nft_connlimit_destroy_clone(const struct nft_ctx *ctx,
static bool nft_connlimit_gc(struct net *net, const struct nft_expr *expr)
{
struct nft_connlimit *priv = nft_expr_priv(expr);
- bool ret;

- local_bh_disable();
- ret = nf_conncount_gc_list(net, priv->list);
- local_bh_enable();
-
- return ret;
+ return nf_conncount_gc_list(net, priv->list);
}

static struct nft_expr_type nft_connlimit_type;
diff --git a/net/netfilter/nft_counter.c b/net/netfilter/nft_counter.c
index cc73253294963..0d70325280cc5 100644
--- a/net/netfilter/nft_counter.c
+++ b/net/netfilter/nft_counter.c
@@ -117,8 +117,8 @@ static void nft_counter_reset(struct nft_counter_percpu_priv *priv,
nft_sync = this_cpu_ptr(&nft_counter_sync);

u64_stats_update_begin(nft_sync);
- u64_stats_add(&this_cpu->packets, -total->packets);
- u64_stats_add(&this_cpu->bytes, -total->bytes);
+ u64_stats_sub(&this_cpu->packets, total->packets);
+ u64_stats_sub(&this_cpu->bytes, total->bytes);
u64_stats_update_end(nft_sync);

local_bh_enable();
diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
index 900eddb93dcc8..e87398cefef00 100644
--- a/net/netfilter/nft_set_hash.c
+++ b/net/netfilter/nft_set_hash.c
@@ -527,15 +527,20 @@ static struct nft_elem_priv *
nft_hash_get(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem, unsigned int flags)
{
+ const u32 *key = (const u32 *)&elem->key.val;
struct nft_hash *priv = nft_set_priv(set);
u8 genmask = nft_genmask_cur(net);
struct nft_hash_elem *he;
u32 hash;

- hash = jhash(elem->key.val.data, set->klen, priv->seed);
+ if (set->klen == 4)
+ hash = jhash_1word(*key, priv->seed);
+ else
+ hash = jhash(key, set->klen, priv->seed);
+
hash = reciprocal_scale(hash, priv->buckets);
hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
- if (!memcmp(nft_set_ext_key(&he->ext), elem->key.val.data, set->klen) &&
+ if (!memcmp(nft_set_ext_key(&he->ext), key, set->klen) &&
nft_set_elem_active(&he->ext, genmask))
return &he->priv;
}
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
index b1f04168ec937..48ad51a448e7b 100644
--- a/net/netfilter/nft_set_rbtree.c
+++ b/net/netfilter/nft_set_rbtree.c
@@ -39,6 +39,13 @@ static bool nft_rbtree_interval_start(const struct nft_rbtree_elem *rbe)
return !nft_rbtree_interval_end(rbe);
}

+static bool nft_rbtree_interval_null(const struct nft_set *set,
+ const struct nft_rbtree_elem *rbe)
+{
+ return (!memchr_inv(nft_set_ext_key(&rbe->ext), 0, set->klen) &&
+ nft_rbtree_interval_end(rbe));
+}
+
static int nft_rbtree_cmp(const struct nft_set *set,
const struct nft_rbtree_elem *e1,
const struct nft_rbtree_elem *e2)
@@ -302,11 +309,23 @@ static bool nft_rbtree_update_first(const struct nft_set *set,
return false;
}

+/* Only for anonymous sets which do not allow updates, all elements are active. */
+static struct nft_rbtree_elem *nft_rbtree_prev_active(struct nft_rbtree_elem *rbe)
+{
+ struct rb_node *node;
+
+ node = rb_prev(&rbe->node);
+ if (!node)
+ return NULL;
+
+ return rb_entry(node, struct nft_rbtree_elem, node);
+}
+
static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
struct nft_rbtree_elem *new,
struct nft_elem_priv **elem_priv)
{
- struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
+ struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL, *rbe_prev;
struct rb_node *node, *next, *parent, **p, *first = NULL;
struct nft_rbtree *priv = nft_set_priv(set);
u8 cur_genmask = nft_genmask_cur(net);
@@ -431,6 +450,12 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
*/
if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
+ /* - ignore null interval, otherwise NLM_F_CREATE bogusly
+ * reports EEXIST.
+ */
+ if (nft_rbtree_interval_null(set, new))
+ return -ECANCELED;
+
*elem_priv = &rbe_le->priv;
return -EEXIST;
}
@@ -438,11 +463,19 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
/* - new start element with existing closest, less or equal key value
* being a start element: partial overlap, reported as -ENOTEMPTY.
* Anonymous sets allow for two consecutive start element since they
- * are constant, skip them to avoid bogus overlap reports.
+ * are constant, but validate that this new start element does not
+ * sit in between an existing start and end elements: partial overlap,
+ * reported as -ENOTEMPTY.
*/
- if (!nft_set_is_anonymous(set) && rbe_le &&
- nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new))
- return -ENOTEMPTY;
+ if (rbe_le &&
+ nft_rbtree_interval_start(rbe_le) && nft_rbtree_interval_start(new)) {
+ if (!nft_set_is_anonymous(set))
+ return -ENOTEMPTY;
+
+ rbe_prev = nft_rbtree_prev_active(rbe_le);
+ if (rbe_prev && nft_rbtree_interval_end(rbe_prev))
+ return -ENOTEMPTY;
+ }

/* - new end element with existing closest, less or equal key value
* being a end element: partial overlap, reported as -ENOTEMPTY.
diff --git a/net/netfilter/xt_tcpmss.c b/net/netfilter/xt_tcpmss.c
index 37704ab017992..0d32d4841cb32 100644
--- a/net/netfilter/xt_tcpmss.c
+++ b/net/netfilter/xt_tcpmss.c
@@ -61,7 +61,7 @@ tcpmss_mt(const struct sk_buff *skb, struct xt_action_param *par)
return (mssval >= info->mss_min &&
mssval <= info->mss_max) ^ info->invert;
}
- if (op[i] < 2)
+ if (op[i] < 2 || i == optlen - 1)
i++;
else
i += op[i+1] ? : 1;
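The xt_tcpmss fix adds `i == optlen - 1` to the guard: when a TLV option kind (>= 2) sits in the final option byte, the old code read the length field `op[i+1]` one byte past the buffer. A self-contained userspace sketch of the corrected walk (hypothetical helper, not the kernel function):

```c
#include <stddef.h>
#include <stdint.h>

#define TCPOPT_EOL 0
#define TCPOPT_NOP 1
#define TCPOPT_MSS 2

/* Walk a TCP options buffer and return the MSS value, or -1 if absent.
 * Mirrors the fixed loop: op[i+1] is never dereferenced when i is the
 * final byte (the "i == optlen - 1" guard). */
static int find_mss(const uint8_t *op, size_t optlen)
{
	size_t i;

	for (i = 0; i < optlen; ) {
		if (op[i] == TCPOPT_MSS && i + 3 < optlen && op[i + 1] == 4)
			return (op[i + 2] << 8) | op[i + 3];

		if (op[i] < 2 || i == optlen - 1)
			i++;			/* EOL/NOP, or last byte */
		else
			i += op[i + 1] ? op[i + 1] : 1;	/* skip TLV, avoid 0-length loop */
	}
	return -1;
}
```

With the guard, a truncated options area ending in a TLV kind byte simply terminates the loop instead of reading out of bounds.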
diff --git a/net/nfc/hci/llc_shdlc.c b/net/nfc/hci/llc_shdlc.c
index e90f70385813a..a106f4352356d 100644
--- a/net/nfc/hci/llc_shdlc.c
+++ b/net/nfc/hci/llc_shdlc.c
@@ -762,6 +762,14 @@ static void llc_shdlc_deinit(struct nfc_llc *llc)
{
struct llc_shdlc *shdlc = nfc_llc_get_data(llc);

+ timer_shutdown_sync(&shdlc->connect_timer);
+ timer_shutdown_sync(&shdlc->t1_timer);
+ timer_shutdown_sync(&shdlc->t2_timer);
+ shdlc->t1_active = false;
+ shdlc->t2_active = false;
+
+ cancel_work_sync(&shdlc->sm_work);
+
skb_queue_purge(&shdlc->rcv_q);
skb_queue_purge(&shdlc->send_q);
skb_queue_purge(&shdlc->ack_pending_q);
diff --git a/net/nfc/nci/ntf.c b/net/nfc/nci/ntf.c
index cb2a672105dc1..df0f1200082cf 100644
--- a/net/nfc/nci/ntf.c
+++ b/net/nfc/nci/ntf.c
@@ -58,7 +58,7 @@ static int nci_core_conn_credits_ntf_packet(struct nci_dev *ndev,
struct nci_conn_info *conn_info;
int i;

- if (skb->len < sizeof(struct nci_core_conn_credit_ntf))
+ if (skb->len < offsetofend(struct nci_core_conn_credit_ntf, num_entries))
return -EINVAL;

ntf = (struct nci_core_conn_credit_ntf *)skb->data;
@@ -68,6 +68,10 @@ static int nci_core_conn_credits_ntf_packet(struct nci_dev *ndev,
if (ntf->num_entries > NCI_MAX_NUM_CONN)
ntf->num_entries = NCI_MAX_NUM_CONN;

+ if (skb->len < offsetofend(struct nci_core_conn_credit_ntf, num_entries) +
+ ntf->num_entries * sizeof(struct conn_credit_entry))
+ return -EINVAL;
+
/* update the credits */
for (i = 0; i < ntf->num_entries; i++) {
ntf->conn_entries[i].conn_id =
@@ -138,23 +142,48 @@ static int nci_core_conn_intf_error_ntf_packet(struct nci_dev *ndev,
static const __u8 *
nci_extract_rf_params_nfca_passive_poll(struct nci_dev *ndev,
struct rf_tech_specific_params_nfca_poll *nfca_poll,
- const __u8 *data)
+ const __u8 *data, ssize_t data_len)
{
+ /* Check if we have enough data for sens_res (2 bytes) */
+ if (data_len < 2)
+ return ERR_PTR(-EINVAL);
+
nfca_poll->sens_res = __le16_to_cpu(*((__le16 *)data));
data += 2;
+ data_len -= 2;
+
+ /* Check if we have enough data for nfcid1_len (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);

nfca_poll->nfcid1_len = min_t(__u8, *data++, NFC_NFCID1_MAXSIZE);
+ data_len--;

pr_debug("sens_res 0x%x, nfcid1_len %d\n",
nfca_poll->sens_res, nfca_poll->nfcid1_len);

+ /* Check if we have enough data for nfcid1 */
+ if (data_len < nfca_poll->nfcid1_len)
+ return ERR_PTR(-EINVAL);
+
memcpy(nfca_poll->nfcid1, data, nfca_poll->nfcid1_len);
data += nfca_poll->nfcid1_len;
+ data_len -= nfca_poll->nfcid1_len;
+
+ /* Check if we have enough data for sel_res_len (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);

nfca_poll->sel_res_len = *data++;
+ data_len--;
+
+ if (nfca_poll->sel_res_len != 0) {
+ /* Check if we have enough data for sel_res (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);

- if (nfca_poll->sel_res_len != 0)
nfca_poll->sel_res = *data++;
+ }

pr_debug("sel_res_len %d, sel_res 0x%x\n",
nfca_poll->sel_res_len,
@@ -166,12 +195,21 @@ nci_extract_rf_params_nfca_passive_poll(struct nci_dev *ndev,
static const __u8 *
nci_extract_rf_params_nfcb_passive_poll(struct nci_dev *ndev,
struct rf_tech_specific_params_nfcb_poll *nfcb_poll,
- const __u8 *data)
+ const __u8 *data, ssize_t data_len)
{
+ /* Check if we have enough data for sensb_res_len (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
nfcb_poll->sensb_res_len = min_t(__u8, *data++, NFC_SENSB_RES_MAXSIZE);
+ data_len--;

pr_debug("sensb_res_len %d\n", nfcb_poll->sensb_res_len);

+ /* Check if we have enough data for sensb_res */
+ if (data_len < nfcb_poll->sensb_res_len)
+ return ERR_PTR(-EINVAL);
+
memcpy(nfcb_poll->sensb_res, data, nfcb_poll->sensb_res_len);
data += nfcb_poll->sensb_res_len;

@@ -181,14 +219,29 @@ nci_extract_rf_params_nfcb_passive_poll(struct nci_dev *ndev,
static const __u8 *
nci_extract_rf_params_nfcf_passive_poll(struct nci_dev *ndev,
struct rf_tech_specific_params_nfcf_poll *nfcf_poll,
- const __u8 *data)
+ const __u8 *data, ssize_t data_len)
{
+ /* Check if we have enough data for bit_rate (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
nfcf_poll->bit_rate = *data++;
+ data_len--;
+
+ /* Check if we have enough data for sensf_res_len (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
nfcf_poll->sensf_res_len = min_t(__u8, *data++, NFC_SENSF_RES_MAXSIZE);
+ data_len--;

pr_debug("bit_rate %d, sensf_res_len %d\n",
nfcf_poll->bit_rate, nfcf_poll->sensf_res_len);

+ /* Check if we have enough data for sensf_res */
+ if (data_len < nfcf_poll->sensf_res_len)
+ return ERR_PTR(-EINVAL);
+
memcpy(nfcf_poll->sensf_res, data, nfcf_poll->sensf_res_len);
data += nfcf_poll->sensf_res_len;

@@ -198,22 +251,49 @@ nci_extract_rf_params_nfcf_passive_poll(struct nci_dev *ndev,
static const __u8 *
nci_extract_rf_params_nfcv_passive_poll(struct nci_dev *ndev,
struct rf_tech_specific_params_nfcv_poll *nfcv_poll,
- const __u8 *data)
+ const __u8 *data, ssize_t data_len)
{
+ /* Skip 1 byte (reserved) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
++data;
+ data_len--;
+
+ /* Check if we have enough data for dsfid (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
nfcv_poll->dsfid = *data++;
+ data_len--;
+
+ /* Check if we have enough data for uid (8 bytes) */
+ if (data_len < NFC_ISO15693_UID_MAXSIZE)
+ return ERR_PTR(-EINVAL);
+
memcpy(nfcv_poll->uid, data, NFC_ISO15693_UID_MAXSIZE);
data += NFC_ISO15693_UID_MAXSIZE;
+
return data;
}

static const __u8 *
nci_extract_rf_params_nfcf_passive_listen(struct nci_dev *ndev,
struct rf_tech_specific_params_nfcf_listen *nfcf_listen,
- const __u8 *data)
+ const __u8 *data, ssize_t data_len)
{
+ /* Check if we have enough data for local_nfcid2_len (1 byte) */
+ if (data_len < 1)
+ return ERR_PTR(-EINVAL);
+
nfcf_listen->local_nfcid2_len = min_t(__u8, *data++,
NFC_NFCID2_MAXSIZE);
+ data_len--;
+
+ /* Check if we have enough data for local_nfcid2 */
+ if (data_len < nfcf_listen->local_nfcid2_len)
+ return ERR_PTR(-EINVAL);
+
memcpy(nfcf_listen->local_nfcid2, data, nfcf_listen->local_nfcid2_len);
data += nfcf_listen->local_nfcid2_len;

@@ -364,7 +444,7 @@ static int nci_rf_discover_ntf_packet(struct nci_dev *ndev,
const __u8 *data;
bool add_target = true;

- if (skb->len < sizeof(struct nci_rf_discover_ntf))
+ if (skb->len < offsetofend(struct nci_rf_discover_ntf, rf_tech_specific_params_len))
return -EINVAL;

data = skb->data;
@@ -380,26 +460,42 @@ static int nci_rf_discover_ntf_packet(struct nci_dev *ndev,
pr_debug("rf_tech_specific_params_len %d\n",
ntf.rf_tech_specific_params_len);

+ if (skb->len < (data - skb->data) +
+ ntf.rf_tech_specific_params_len + sizeof(ntf.ntf_type))
+ return -EINVAL;
+
if (ntf.rf_tech_specific_params_len > 0) {
switch (ntf.rf_tech_and_mode) {
case NCI_NFC_A_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfca_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfca_poll), data);
+ &(ntf.rf_tech_specific_params.nfca_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
break;

case NCI_NFC_B_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcb_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcb_poll), data);
+ &(ntf.rf_tech_specific_params.nfcb_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
break;

case NCI_NFC_F_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcf_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcf_poll), data);
+ &(ntf.rf_tech_specific_params.nfcf_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
break;

case NCI_NFC_V_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcv_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcv_poll), data);
+ &(ntf.rf_tech_specific_params.nfcv_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
break;

default:
@@ -574,7 +670,7 @@ static int nci_rf_intf_activated_ntf_packet(struct nci_dev *ndev,
const __u8 *data;
int err = NCI_STATUS_OK;

- if (skb->len < sizeof(struct nci_rf_intf_activated_ntf))
+ if (skb->len < offsetofend(struct nci_rf_intf_activated_ntf, rf_tech_specific_params_len))
return -EINVAL;

data = skb->data;
@@ -606,26 +702,41 @@ static int nci_rf_intf_activated_ntf_packet(struct nci_dev *ndev,
if (ntf.rf_interface == NCI_RF_INTERFACE_NFCEE_DIRECT)
goto listen;

+ if (skb->len < (data - skb->data) + ntf.rf_tech_specific_params_len)
+ return -EINVAL;
+
if (ntf.rf_tech_specific_params_len > 0) {
switch (ntf.activation_rf_tech_and_mode) {
case NCI_NFC_A_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfca_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfca_poll), data);
+ &(ntf.rf_tech_specific_params.nfca_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return -EINVAL;
break;

case NCI_NFC_B_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcb_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcb_poll), data);
+ &(ntf.rf_tech_specific_params.nfcb_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return -EINVAL;
break;

case NCI_NFC_F_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcf_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcf_poll), data);
+ &(ntf.rf_tech_specific_params.nfcf_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return -EINVAL;
break;

case NCI_NFC_V_PASSIVE_POLL_MODE:
data = nci_extract_rf_params_nfcv_passive_poll(ndev,
- &(ntf.rf_tech_specific_params.nfcv_poll), data);
+ &(ntf.rf_tech_specific_params.nfcv_poll), data,
+ ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return -EINVAL;
break;

case NCI_NFC_A_PASSIVE_LISTEN_MODE:
@@ -635,7 +746,9 @@ static int nci_rf_intf_activated_ntf_packet(struct nci_dev *ndev,
case NCI_NFC_F_PASSIVE_LISTEN_MODE:
data = nci_extract_rf_params_nfcf_passive_listen(ndev,
&(ntf.rf_tech_specific_params.nfcf_listen),
- data);
+ data, ntf.rf_tech_specific_params_len);
+ if (IS_ERR(data))
+ return -EINVAL;
break;

default:
@@ -646,6 +759,13 @@ static int nci_rf_intf_activated_ntf_packet(struct nci_dev *ndev,
}
}

+ if (skb->len < (data - skb->data) +
+ sizeof(ntf.data_exch_rf_tech_and_mode) +
+ sizeof(ntf.data_exch_tx_bit_rate) +
+ sizeof(ntf.data_exch_rx_bit_rate) +
+ sizeof(ntf.activation_params_len))
+ return -EINVAL;
+
ntf.data_exch_rf_tech_and_mode = *data++;
ntf.data_exch_tx_bit_rate = *data++;
ntf.data_exch_rx_bit_rate = *data++;
@@ -657,6 +777,9 @@ static int nci_rf_intf_activated_ntf_packet(struct nci_dev *ndev,
pr_debug("data_exch_rx_bit_rate 0x%x\n", ntf.data_exch_rx_bit_rate);
pr_debug("activation_params_len %d\n", ntf.activation_params_len);

+ if (skb->len < (data - skb->data) + ntf.activation_params_len)
+ return -EINVAL;
+
if (ntf.activation_params_len > 0) {
switch (ntf.rf_interface) {
case NCI_RF_INTERFACE_ISO_DEP:
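The NCI notification changes give every `nci_extract_rf_params_*` helper a `data_len` argument and validate the remaining length before each read or copy, returning `ERR_PTR(-EINVAL)` on truncated input. A userspace sketch of that check-then-consume discipline (hypothetical cursor type, not the kernel code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal check-then-consume cursor, illustrating the bounds checks
 * added to the nci_extract_rf_params_* helpers. Names are hypothetical. */
struct cursor {
	const uint8_t *p;
	ptrdiff_t len;
};

static int take_u8(struct cursor *c, uint8_t *out)
{
	if (c->len < 1)
		return -1;	/* the patch returns ERR_PTR(-EINVAL) here */
	*out = *c->p++;
	c->len--;
	return 0;
}

static int take_bytes(struct cursor *c, uint8_t *dst, size_t n)
{
	if (c->len < (ptrdiff_t)n)
		return -1;
	memcpy(dst, c->p, n);
	c->p += n;
	c->len -= n;
	return 0;
}

/* e.g. the NFC-B poll-params shape: a length byte, clamped to a
 * maximum, followed by that many payload bytes. */
static int parse_len_prefixed(struct cursor *c, uint8_t *dst,
			      uint8_t maxlen, uint8_t *out_len)
{
	uint8_t n;

	if (take_u8(c, &n))
		return -1;
	if (n > maxlen)
		n = maxlen;	/* mirrors min_t(__u8, *data++, ...) */
	if (take_bytes(c, dst, n))
		return -1;
	*out_len = n;
	return 0;
}
```

Each kernel helper applies the same two steps (check remaining length, then advance pointer and decrement length) before every field, so a malformed notification fails cleanly instead of reading past the skb.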
diff --git a/net/rds/connection.c b/net/rds/connection.c
index c749c5525b40f..3a1b548dcdcb2 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -381,6 +381,8 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
if (!rds_conn_path_transition(cp, RDS_CONN_UP,
RDS_CONN_DISCONNECTING) &&
!rds_conn_path_transition(cp, RDS_CONN_ERROR,
+ RDS_CONN_DISCONNECTING) &&
+ !rds_conn_path_transition(cp, RDS_CONN_RESETTING,
RDS_CONN_DISCONNECTING)) {
rds_conn_path_error(cp,
"shutdown called in state %d\n",
@@ -426,6 +428,8 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
* to the conn hash, so we never trigger a reconnect on this
* conn - the reconnect is always triggered by the active peer. */
cancel_delayed_work_sync(&cp->cp_conn_w);
+
+ clear_bit(RDS_RECONNECT_PENDING, &cp->cp_flags);
rcu_read_lock();
if (!hlist_unhashed(&conn->c_hash_node)) {
rcu_read_unlock();
diff --git a/net/rds/send.c b/net/rds/send.c
index 09a2801106549..4a24ee9c22d7c 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -1382,9 +1382,11 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
else
queue_delayed_work(rds_wq, &cpath->cp_send_w, 1);
rcu_read_unlock();
+
+ if (ret)
+ goto out;
}
- if (ret)
- goto out;
+
rds_message_put(rm);

for (ind = 0; ind < vct.indx; ind++)
diff --git a/net/rds/tcp_listen.c b/net/rds/tcp_listen.c
index d89bd8d0c3545..fced9e286f79f 100644
--- a/net/rds/tcp_listen.c
+++ b/net/rds/tcp_listen.c
@@ -59,9 +59,6 @@ void rds_tcp_keepalive(struct socket *sock)
* socket and force a reconneect from smaller -> larger ip addr. The reason
* we special case cp_index 0 is to allow the rds probe ping itself to itself
* get through efficiently.
- * Since reconnects are only initiated from the node with the numerically
- * smaller ip address, we recycle conns in RDS_CONN_ERROR on the passive side
- * by moving them to CONNECTING in this function.
*/
static
struct rds_tcp_connection *rds_tcp_accept_one_path(struct rds_connection *conn)
@@ -86,8 +83,6 @@ struct rds_tcp_connection *rds_tcp_accept_one_path(struct rds_connection *conn)
struct rds_conn_path *cp = &conn->c_path[i];

if (rds_conn_path_transition(cp, RDS_CONN_DOWN,
- RDS_CONN_CONNECTING) ||
- rds_conn_path_transition(cp, RDS_CONN_ERROR,
RDS_CONN_CONNECTING)) {
return cp->cp_transport_data;
}
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index 1f1d9ce3e968a..2680196b09d0d 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -128,7 +128,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
struct tcf_skbedit *d;
u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL;
u16 *queue_mapping = NULL, *ptype = NULL;
- u16 mapping_mod = 1;
+ u32 mapping_mod = 1;
bool exists = false;
int ret = 0, err;
u32 index;
@@ -196,6 +196,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
}

mapping_mod = *queue_mapping_max - *queue_mapping + 1;
+ if (mapping_mod > U16_MAX) {
+ NL_SET_ERR_MSG_MOD(extack, "The range of queue_mapping is invalid.");
+ return -EINVAL;
+ }
flags |= SKBEDIT_F_TXQ_SKBHASH;
}
if (*pure_flags & SKBEDIT_F_INHERITDSFIELD)
diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
index 369310909fc98..785d53a612426 100644
--- a/net/sunrpc/auth_gss/auth_gss.c
+++ b/net/sunrpc/auth_gss/auth_gss.c
@@ -39,6 +39,8 @@ static const struct rpc_authops authgss_ops;
static const struct rpc_credops gss_credops;
static const struct rpc_credops gss_nullops;

+static void gss_free_callback(struct kref *kref);
+
#define GSS_RETRY_EXPIRED 5
static unsigned int gss_expired_cred_retry_delay = GSS_RETRY_EXPIRED;

@@ -551,6 +553,7 @@ gss_alloc_msg(struct gss_auth *gss_auth,
}
return gss_msg;
err_put_pipe_version:
+ kref_put(&gss_auth->kref, gss_free_callback);
put_pipe_version(gss_auth->net);
err_free_msg:
kfree(gss_msg);
diff --git a/net/sunrpc/auth_gss/gss_rpc_xdr.c b/net/sunrpc/auth_gss/gss_rpc_xdr.c
index cb32ab9a83952..ee91f4d641e6f 100644
--- a/net/sunrpc/auth_gss/gss_rpc_xdr.c
+++ b/net/sunrpc/auth_gss/gss_rpc_xdr.c
@@ -320,29 +320,47 @@ static int gssx_dec_status(struct xdr_stream *xdr,

/* status->minor_status */
p = xdr_inline_decode(xdr, 8);
- if (unlikely(p == NULL))
- return -ENOSPC;
+ if (unlikely(p == NULL)) {
+ err = -ENOSPC;
+ goto out_free_mech;
+ }
p = xdr_decode_hyper(p, &status->minor_status);

/* status->major_status_string */
err = gssx_dec_buffer(xdr, &status->major_status_string);
if (err)
- return err;
+ goto out_free_mech;

/* status->minor_status_string */
err = gssx_dec_buffer(xdr, &status->minor_status_string);
if (err)
- return err;
+ goto out_free_major_status_string;

/* status->server_ctx */
err = gssx_dec_buffer(xdr, &status->server_ctx);
if (err)
- return err;
+ goto out_free_minor_status_string;

/* we assume we have no options for now, so simply consume them */
/* status->options */
err = dummy_dec_opt_array(xdr, &status->options);
+ if (err)
+ goto out_free_server_ctx;

+ return 0;
+
+out_free_server_ctx:
+ kfree(status->server_ctx.data);
+ status->server_ctx.data = NULL;
+out_free_minor_status_string:
+ kfree(status->minor_status_string.data);
+ status->minor_status_string.data = NULL;
+out_free_major_status_string:
+ kfree(status->major_status_string.data);
+ status->major_status_string.data = NULL;
+out_free_mech:
+ kfree(status->mech.data);
+ status->mech.data = NULL;
return err;
}

@@ -505,28 +523,35 @@ static int gssx_dec_name(struct xdr_stream *xdr,
/* name->name_type */
err = gssx_dec_buffer(xdr, &dummy_netobj);
if (err)
- return err;
+ goto out_free_display_name;

/* name->exported_name */
err = gssx_dec_buffer(xdr, &dummy_netobj);
if (err)
- return err;
+ goto out_free_display_name;

/* name->exported_composite_name */
err = gssx_dec_buffer(xdr, &dummy_netobj);
if (err)
- return err;
+ goto out_free_display_name;

/* we assume we have no attributes for now, so simply consume them */
/* name->name_attributes */
err = dummy_dec_nameattr_array(xdr, &dummy_name_attr_array);
if (err)
- return err;
+ goto out_free_display_name;

/* we assume we have no options for now, so simply consume them */
/* name->extensions */
err = dummy_dec_opt_array(xdr, &dummy_option_array);
+ if (err)
+ goto out_free_display_name;

+ return 0;
+
+out_free_display_name:
+ kfree(name->display_name.data);
+ name->display_name.data = NULL;
return err;
}

@@ -649,32 +674,34 @@ static int gssx_dec_ctx(struct xdr_stream *xdr,
/* ctx->state */
err = gssx_dec_buffer(xdr, &ctx->state);
if (err)
- return err;
+ goto out_free_exported_context_token;

/* ctx->need_release */
err = gssx_dec_bool(xdr, &ctx->need_release);
if (err)
- return err;
+ goto out_free_state;

/* ctx->mech */
err = gssx_dec_buffer(xdr, &ctx->mech);
if (err)
- return err;
+ goto out_free_state;

/* ctx->src_name */
err = gssx_dec_name(xdr, &ctx->src_name);
if (err)
- return err;
+ goto out_free_mech;

/* ctx->targ_name */
err = gssx_dec_name(xdr, &ctx->targ_name);
if (err)
- return err;
+ goto out_free_src_name;

/* ctx->lifetime */
p = xdr_inline_decode(xdr, 8+8);
- if (unlikely(p == NULL))
- return -ENOSPC;
+ if (unlikely(p == NULL)) {
+ err = -ENOSPC;
+ goto out_free_targ_name;
+ }
p = xdr_decode_hyper(p, &ctx->lifetime);

/* ctx->ctx_flags */
@@ -683,17 +710,36 @@ static int gssx_dec_ctx(struct xdr_stream *xdr,
/* ctx->locally_initiated */
err = gssx_dec_bool(xdr, &ctx->locally_initiated);
if (err)
- return err;
+ goto out_free_targ_name;

/* ctx->open */
err = gssx_dec_bool(xdr, &ctx->open);
if (err)
- return err;
+ goto out_free_targ_name;

/* we assume we have no options for now, so simply consume them */
/* ctx->options */
err = dummy_dec_opt_array(xdr, &ctx->options);
+ if (err)
+ goto out_free_targ_name;
+
+ return 0;

+out_free_targ_name:
+ kfree(ctx->targ_name.display_name.data);
+ ctx->targ_name.display_name.data = NULL;
+out_free_src_name:
+ kfree(ctx->src_name.display_name.data);
+ ctx->src_name.display_name.data = NULL;
+out_free_mech:
+ kfree(ctx->mech.data);
+ ctx->mech.data = NULL;
+out_free_state:
+ kfree(ctx->state.data);
+ ctx->state.data = NULL;
+out_free_exported_context_token:
+ kfree(ctx->exported_context_token.data);
+ ctx->exported_context_token.data = NULL;
return err;
}

diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
index 3d7f1413df023..12857381e8610 100644
--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
+++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
@@ -462,7 +462,10 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
newxprt->sc_max_bc_requests = 2;
}

- /* Arbitrary estimate of the needed number of rdma_rw contexts.
+ /* Estimate the needed number of rdma_rw contexts. The maximum
+ * Read and Write chunks have one segment each. Each request
+ * can involve one Read chunk and either a Write chunk or Reply
+ * chunk; thus a factor of three.
*/
maxpayload = min(xprt->xpt_server->sv_max_payload,
RPCSVC_MAXPAYLOAD_RDMA);
@@ -470,7 +473,8 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
rdma_rw_mr_factor(dev, newxprt->sc_port_num,
maxpayload >> PAGE_SHIFT);

- newxprt->sc_sq_depth = rq_depth + ctxts;
+ newxprt->sc_sq_depth = rq_depth +
+ rdma_rw_max_send_wr(dev, newxprt->sc_port_num, ctxts, 0);
if (newxprt->sc_sq_depth > dev->attrs.max_qp_wr)
newxprt->sc_sq_depth = dev->attrs.max_qp_wr;
atomic_set(&newxprt->sc_sq_avail, newxprt->sc_sq_depth);
diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
index 2721baf9fd2b3..d7736d9002715 100644
--- a/net/tipc/crypto.c
+++ b/net/tipc/crypto.c
@@ -460,7 +460,7 @@ static void tipc_aead_users_dec(struct tipc_aead __rcu *aead, int lim)
rcu_read_lock();
tmp = rcu_dereference(aead);
if (tmp)
- atomic_add_unless(&rcu_dereference(aead)->users, -1, lim);
+ atomic_add_unless(&tmp->users, -1, lim);
rcu_read_unlock();
}
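[Reviewer note, not part of the patch] The tipc change reuses the pointer already obtained (and NULL-checked) via rcu_dereference() instead of dereferencing the RCU pointer a second time, which could observe a different (possibly NULL) value. The helper it feeds, atomic_add_unless(), behaves roughly like this C11 compare-and-swap loop — a userspace sketch, not the kernel implementation:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Userspace analogue of the kernel's atomic_add_unless(): add `a` to
 * `*v` unless the current value equals `u`; returns true iff the add
 * happened. */
static bool add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	while (c != u) {
		/* On failure the CAS reloads `c` with the current value,
		 * so the loop re-checks the `u` guard before retrying. */
		if (atomic_compare_exchange_weak(v, &c, c + a))
			return true;
	}
	return false;
}
```

With a = -1 and u = lim this is "decrement unless already at the limit", matching the users-count semantics in tipc_aead_users_dec().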

diff --git a/net/tipc/name_table.c b/net/tipc/name_table.c
index d1180370fdf41..e6555254ddb85 100644
--- a/net/tipc/name_table.c
+++ b/net/tipc/name_table.c
@@ -348,7 +348,8 @@ static bool tipc_service_insert_publ(struct net *net,

/* Return if the publication already exists */
list_for_each_entry(_p, &sr->all_publ, all_publ) {
- if (_p->key == key && (!_p->sk.node || _p->sk.node == node)) {
+ if (_p->key == key && _p->sk.ref == p->sk.ref &&
+ (!_p->sk.node || _p->sk.node == node)) {
pr_debug("Failed to bind duplicate %u,%u,%u/%u:%u/%u\n",
p->sr.type, p->sr.lower, p->sr.upper,
node, p->sk.ref, key);
@@ -388,7 +389,8 @@ static struct publication *tipc_service_remove_publ(struct service_range *r,
u32 node = sk->node;

list_for_each_entry(p, &r->all_publ, all_publ) {
- if (p->key != key || (node && node != p->sk.node))
+ if (p->key != key || p->sk.ref != sk->ref ||
+ (node && node != p->sk.node))
continue;
list_del(&p->all_publ);
list_del(&p->local_publ);
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index 1ff0d01bdadf0..bf01155b1011a 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -2499,7 +2499,7 @@ void tls_sw_cancel_work_tx(struct tls_context *tls_ctx)

set_bit(BIT_TX_CLOSING, &ctx->tx_bitmask);
set_bit(BIT_TX_SCHEDULED, &ctx->tx_bitmask);
- cancel_delayed_work_sync(&ctx->tx_work.work);
+ disable_delayed_work_sync(&ctx->tx_work.work);
}

void tls_sw_release_resources_tx(struct sock *sk)
diff --git a/net/vmw_vsock/vmci_transport.c b/net/vmw_vsock/vmci_transport.c
index 7eccd6708d664..aca3132689cf1 100644
--- a/net/vmw_vsock/vmci_transport.c
+++ b/net/vmw_vsock/vmci_transport.c
@@ -161,7 +161,7 @@ vmci_transport_packet_init(struct vmci_transport_packet *pkt,

case VMCI_TRANSPORT_PACKET_TYPE_WAITING_READ:
case VMCI_TRANSPORT_PACKET_TYPE_WAITING_WRITE:
- memcpy(&pkt->u.wait, wait, sizeof(pkt->u.wait));
+ pkt->u.wait = *wait;
break;

case VMCI_TRANSPORT_PACKET_TYPE_REQUEST2:
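[Reviewer note, not part of the patch] Replacing a whole-struct memcpy() with plain assignment is behavior-preserving for plain-old-data and lets the compiler type-check the copy instead of trusting a sizeof expression. A sketch with a made-up `wait_info` struct (the real `pkt->u.wait` layout is in the vmci headers):

```c
/* Hypothetical stand-in for the vmci wait payload. */
struct wait_info {
	unsigned long long offset;
	unsigned long long generation;
};

/* Struct assignment copies every member, exactly like the memcpy it
 * replaces, but a mismatched source type becomes a compile error
 * instead of a silent wrong-size copy. */
static void copy_wait(struct wait_info *dst, const struct wait_info *src)
{
	*dst = *src;
}
```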
diff --git a/net/wireless/core.c b/net/wireless/core.c
index 6bb8a7037d24d..73c051f129d33 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -665,12 +665,8 @@ int wiphy_verify_iface_combinations(struct wiphy *wiphy,
c->limits[j].max > 1))
return -EINVAL;

- /* Only a single NAN can be allowed, avoid this
- * check for multi-radio global combination, since it
- * hold the capabilities of all radio combinations.
- */
- if (!combined_radio &&
- WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
+ /* Only a single NAN can be allowed */
+ if (WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
c->limits[j].max > 1))
return -EINVAL;

@@ -1378,8 +1374,10 @@ void cfg80211_leave(struct cfg80211_registered_device *rdev,
cfg80211_leave_ocb(rdev, dev);
break;
case NL80211_IFTYPE_P2P_DEVICE:
+ cfg80211_stop_p2p_device(rdev, wdev);
+ break;
case NL80211_IFTYPE_NAN:
- /* cannot happen, has no netdev */
+ cfg80211_stop_nan(rdev, wdev);
break;
case NL80211_IFTYPE_AP_VLAN:
case NL80211_IFTYPE_MONITOR:
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index f00ccc6d803be..f9aff1c58e800 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -1906,7 +1906,7 @@ cfg80211_update_known_bss(struct cfg80211_registered_device *rdev,
ether_addr_copy(known->parent_bssid, new->parent_bssid);
known->pub.max_bssid_indicator = new->pub.max_bssid_indicator;
known->pub.bssid_index = new->pub.bssid_index;
- known->pub.use_for &= new->pub.use_for;
+ known->pub.use_for = new->pub.use_for;
known->pub.cannot_use_reasons = new->pub.cannot_use_reasons;
known->bss_source = new->bss_source;

diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
index 2371069f3c432..3af11dae0942a 100644
--- a/net/wireless/wext-compat.c
+++ b/net/wireless/wext-compat.c
@@ -714,7 +714,7 @@ static int cfg80211_wext_siwencodeext(struct net_device *dev,

idx = erq->flags & IW_ENCODE_INDEX;
if (cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
- if (idx < 4 || idx > 5) {
+ if (idx < 5 || idx > 6) {
idx = wdev->wext.default_mgmt_key;
if (idx < 0)
return -EINVAL;
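[Reviewer note, not part of the patch] Reading the wext fix: IW_ENCODE_INDEX values from userspace are 1-based, while the management (BIP/CMAC) keys occupy 0-based slots 4 and 5 — so the valid user indices are 5 and 6, and the old 4..5 check was off by one. That interpretation is an inference from the check the patch installs; as a sketch:

```c
/* User-visible key indices are 1-based; management keys live in
 * 0-based slots 4 and 5, so only user indices 5 and 6 are valid.
 * Returns the 0-based slot, or -1 for an out-of-range index.
 * (Illustrative helper, not a function in the wext code.) */
static int mgmt_key_slot(int user_idx)
{
	if (user_idx < 5 || user_idx > 6)
		return -1;
	return user_idx - 1;
}
```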
diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
index fc7a603b04f13..a591df9253526 100644
--- a/net/xfrm/espintcp.c
+++ b/net/xfrm/espintcp.c
@@ -536,7 +536,7 @@ static void espintcp_close(struct sock *sk, long timeout)
sk->sk_prot = &tcp_prot;
barrier();

- cancel_work_sync(&ctx->work);
+ disable_work_sync(&ctx->work);
strp_done(&ctx->strp);

skb_queue_purge(&ctx->out_queue);
diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
index 32ad8f3fc81e8..7744ed8e4c83f 100644
--- a/net/xfrm/xfrm_device.c
+++ b/net/xfrm/xfrm_device.c
@@ -517,6 +517,14 @@ static int xfrm_dev_down(struct net_device *dev)
return NOTIFY_DONE;
}

+static int xfrm_dev_unregister(struct net_device *dev)
+{
+ xfrm_dev_state_flush(dev_net(dev), dev, true);
+ xfrm_dev_policy_flush(dev_net(dev), dev, true);
+
+ return NOTIFY_DONE;
+}
+
static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
@@ -529,8 +537,10 @@ static int xfrm_dev_event(struct notifier_block *this, unsigned long event, void
return xfrm_api_check(dev);

case NETDEV_DOWN:
- case NETDEV_UNREGISTER:
return xfrm_dev_down(dev);
+
+ case NETDEV_UNREGISTER:
+ return xfrm_dev_unregister(dev);
}
return NOTIFY_DONE;
}
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index 2c42d83fbaa2d..aba06ff8694de 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -3795,8 +3795,8 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
struct xfrm_tmpl *tp[XFRM_MAX_DEPTH];
struct xfrm_tmpl *stp[XFRM_MAX_DEPTH];
struct xfrm_tmpl **tpp = tp;
+ int i, k = 0;
int ti = 0;
- int i, k;

sp = skb_sec_path(skb);
if (!sp)
@@ -3822,6 +3822,12 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
tpp = stp;
}

+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET && sp == &dummy)
+ /* This policy template was already checked by HW
+ * and secpath was removed in __xfrm_policy_check2.
+ */
+ goto out;
+
/* For each tunnel xfrm, find the first matching tmpl.
* For each tmpl before that, find corresponding xfrm.
* Order is _important_. Later we will implement
@@ -3831,7 +3837,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
* verified to allow them to be skipped in future policy
* checks (e.g. nested tunnels).
*/
- for (i = xfrm_nr-1, k = 0; i >= 0; i--) {
+ for (i = xfrm_nr - 1; i >= 0; i--) {
k = xfrm_policy_ok(tpp[i], sp, k, family, if_id);
if (k < 0) {
if (k < -1)
@@ -3847,6 +3853,7 @@ int __xfrm_policy_check(struct sock *sk, int dir, struct sk_buff *skb,
goto reject;
}

+out:
xfrm_pols_put(pols, npols);
sp->verified_cnt = k;
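[Reviewer note, not part of the patch] The `int i, k = 0;` hunk above fixes a classic pattern: `k` used to be initialized in the for-header, but the new early `goto out` path (and a zero-iteration loop) can reach `sp->verified_cnt = k` without the loop ever running. The general rule, as a trivially testable sketch:

```c
/* When a loop can run zero times, any variable read after it must be
 * initialized before the loop, not in the loop's init clause. */
static int verified_count(int nr)
{
	int i, k = 0;	/* k = 0 at declaration, as in the patch */

	for (i = nr - 1; i >= 0; i--)
		k++;	/* stand-in for xfrm_policy_ok() advancing k */

	return k;	/* well-defined even when nr == 0 */
}
```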

diff --git a/rust/Makefile b/rust/Makefile
index c5571bab1bf5b..9c11b11a11d4f 100644
--- a/rust/Makefile
+++ b/rust/Makefile
@@ -388,6 +388,8 @@ quiet_cmd_rustc_procmacro = $(RUSTC_OR_CLIPPY_QUIET) P $@
$(obj)/libmacros.so: $(src)/macros/lib.rs FORCE
+$(call if_changed_dep,rustc_procmacro)

+# `rustc` requires `-Zunstable-options` to use custom target specifications
+# since Rust 1.95.0 (https://github.com/rust-lang/rust/pull/151534).
quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L $@
cmd_rustc_library = \
OBJTREE=$(abspath $(objtree)) \
@@ -398,6 +400,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
--crate-type rlib -L$(objtree)/$(obj) \
--crate-name $(patsubst %.o,%,$(notdir $@)) $< \
--sysroot=/dev/null \
+ -Zunstable-options \
$(if $(rustc_objcopy),;$(OBJCOPY) $(rustc_objcopy) $@) \
$(cmd_objtool)

diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index 971eda0c6ba73..07e8a4eb71d04 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -593,6 +593,10 @@ static int ignore_undef_symbol(struct elf_info *info, const char *symname)
/* Special register function linked on all modules during final link of .ko */
if (strstarts(symname, "_restgpr0_") ||
strstarts(symname, "_savegpr0_") ||
+ strstarts(symname, "_restgpr1_") ||
+ strstarts(symname, "_savegpr1_") ||
+ strstarts(symname, "_restfpr_") ||
+ strstarts(symname, "_savefpr_") ||
strstarts(symname, "_restvr_") ||
strstarts(symname, "_savevr_") ||
strcmp(symname, ".TOC.") == 0)
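[Reviewer note, not part of the patch] The modpost hunk just extends a prefix-match whitelist: PPC register save/restore helpers are linked into every .ko at final link, so their undefined references are expected. The kernel's strstarts() and the filter shape, as a userspace sketch:

```c
#include <string.h>

/* Same contract as the kernel's strstarts(): does str begin with prefix? */
static int starts_with(const char *str, const char *prefix)
{
	return strncmp(str, prefix, strlen(prefix)) == 0;
}

/* Sketch of the modpost filter with the prefixes the patch adds:
 * PPC save/restore helpers are provided at final .ko link, so their
 * undefined references should be ignored. */
static int ignore_ppc_helper(const char *symname)
{
	static const char *const prefixes[] = {
		"_restgpr0_", "_savegpr0_",
		"_restgpr1_", "_savegpr1_",
		"_restfpr_",  "_savefpr_",
		"_restvr_",   "_savevr_",
	};
	size_t i;

	for (i = 0; i < sizeof(prefixes) / sizeof(prefixes[0]); i++)
		if (starts_with(symname, prefixes[i]))
			return 1;
	return strcmp(symname, ".TOC.") == 0;
}
```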
diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
index 01b923d97a446..584b40718ecb7 100644
--- a/security/apparmor/apparmorfs.c
+++ b/security/apparmor/apparmorfs.c
@@ -1631,6 +1631,15 @@ static const char *rawdata_get_link_base(struct dentry *dentry,

label = aa_get_label_rcu(&proxy->label);
profile = labels_profile(label);
+
+ /* rawdata can be null when aa_g_export_binary is unset during
+ * runtime and a profile is replaced
+ */
+ if (!profile->rawdata) {
+ aa_put_label(label);
+ return ERR_PTR(-ENOENT);
+ }
+
depth = profile_depth(profile);
target = gen_symlink_name(depth, profile->rawdata->name, name);
aa_put_label(label);
diff --git a/security/apparmor/include/match.h b/security/apparmor/include/match.h
index ae31a8a631fc6..dfc93631b43d8 100644
--- a/security/apparmor/include/match.h
+++ b/security/apparmor/include/match.h
@@ -102,16 +102,18 @@ struct aa_dfa {
struct table_header *tables[YYTD_ID_TSIZE];
};

-#define byte_to_byte(X) (X)
-
#define UNPACK_ARRAY(TABLE, BLOB, LEN, TTYPE, BTYPE, NTOHX) \
do { \
typeof(LEN) __i; \
TTYPE *__t = (TTYPE *) TABLE; \
BTYPE *__b = (BTYPE *) BLOB; \
- for (__i = 0; __i < LEN; __i++) { \
- __t[__i] = NTOHX(__b[__i]); \
- } \
+ BUILD_BUG_ON(sizeof(TTYPE) != sizeof(BTYPE)); \
+ if (IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)) \
+ memcpy(__t, __b, (LEN) * sizeof(BTYPE)); \
+ else /* copy & convert from big-endian */ \
+ for (__i = 0; __i < LEN; __i++) { \
+ __t[__i] = NTOHX(&__b[__i]); \
+ } \
} while (0)

static inline size_t table_size(size_t len, size_t el_size)
diff --git a/security/apparmor/label.c b/security/apparmor/label.c
index c71e4615dd460..af25ca6b6b83f 100644
--- a/security/apparmor/label.c
+++ b/security/apparmor/label.c
@@ -1290,7 +1290,7 @@ static inline aa_state_t match_component(struct aa_profile *profile,
* @request: permissions to request
* @perms: perms struct to set
*
- * Returns: 0 on success else ERROR
+ * Returns: state match stopped at or DFA_NOMATCH if aborted early
*
* For the label A//&B//&C this does the perm match for A//&B//&C
* @perms should be preinitialized with allperms OR a previous permission
@@ -1317,7 +1317,7 @@ static int label_compound_match(struct aa_profile *profile,

/* no component visible */
*perms = allperms;
- return 0;
+ return state;

next:
label_for_each_cont(i, label, tp) {
@@ -1329,15 +1329,11 @@ static int label_compound_match(struct aa_profile *profile,
goto fail;
}
*perms = *aa_lookup_perms(rules->policy, state);
- aa_apply_modes_to_perms(profile, perms);
- if ((perms->allow & request) != request)
- return -EACCES;
-
- return 0;
+ return state;

fail:
*perms = nullperms;
- return state;
+ return DFA_NOMATCH;
}

/**
@@ -1350,7 +1346,7 @@ static int label_compound_match(struct aa_profile *profile,
* @request: permissions to request
* @perms: an initialized perms struct to add accumulation to
*
- * Returns: 0 on success else ERROR
+ * Returns: the state the match finished in, may be the none matching state
*
* For the label A//&B//&C this does the perm match for each of A and B and C
* @perms should be preinitialized with allperms OR a previous permission
@@ -1378,11 +1374,10 @@ static int label_components_match(struct aa_profile *profile,
}

/* no subcomponents visible - no change in perms */
- return 0;
+ return state;

next:
tmp = *aa_lookup_perms(rules->policy, state);
- aa_apply_modes_to_perms(profile, &tmp);
aa_perms_accum(perms, &tmp);
label_for_each_cont(i, label, tp) {
if (!aa_ns_visible(profile->ns, tp->ns, subns))
@@ -1391,18 +1386,17 @@ static int label_components_match(struct aa_profile *profile,
if (!state)
goto fail;
tmp = *aa_lookup_perms(rules->policy, state);
- aa_apply_modes_to_perms(profile, &tmp);
aa_perms_accum(perms, &tmp);
}

if ((perms->allow & request) != request)
- return -EACCES;
+ return DFA_NOMATCH;

- return 0;
+ return state;

fail:
*perms = nullperms;
- return -EACCES;
+ return DFA_NOMATCH;
}

/**
@@ -1421,11 +1415,12 @@ int aa_label_match(struct aa_profile *profile, struct aa_ruleset *rules,
struct aa_label *label, aa_state_t state, bool subns,
u32 request, struct aa_perms *perms)
{
- int error = label_compound_match(profile, rules, label, state, subns,
- request, perms);
- if (!error)
- return error;
+ aa_state_t tmp = label_compound_match(profile, rules, label, state, subns,
+ request, perms);
+ if ((perms->allow & request) == request)
+ return tmp;

+ /* failed compound_match try component matches */
*perms = allperms;
return label_components_match(profile, rules, label, state, subns,
request, perms);
diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 9a78fd36542d6..8855fb4b88342 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -1863,7 +1863,8 @@ char *aa_get_buffer(bool in_atomic)
if (!list_empty(&cache->head)) {
aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
list_del(&aa_buf->list);
- cache->hold--;
+ if (cache->hold)
+ cache->hold--;
cache->count--;
put_cpu_ptr(&aa_local_buffers);
return &aa_buf->buffer[0];
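[Reviewer note, not part of the patch] The `if (cache->hold)` guard above exists because `hold` is an unsigned count: decrementing it at zero would wrap to UINT_MAX rather than go negative. Minimal sketch of the guarded decrement (field names borrowed from the hunk, struct layout hypothetical):

```c
/* hold is an unsigned count that must never wrap below zero; guard
 * the decrement instead of trusting callers to keep it balanced. */
struct buf_cache {
	unsigned int hold;
	unsigned int count;
};

static void cache_take(struct buf_cache *c)
{
	if (c->hold)	/* unsigned: 0 - 1 would wrap to UINT_MAX */
		c->hold--;
	c->count--;
}
```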
diff --git a/security/apparmor/match.c b/security/apparmor/match.c
index 12e036f8ce0f7..fae26953619a7 100644
--- a/security/apparmor/match.c
+++ b/security/apparmor/match.c
@@ -15,6 +15,7 @@
#include <linux/vmalloc.h>
#include <linux/err.h>
#include <linux/kref.h>
+#include <linux/unaligned.h>

#include "include/lib.h"
#include "include/match.h"
@@ -42,11 +43,11 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
/* loaded td_id's start at 1, subtract 1 now to avoid doing
* it every time we use td_id as an index
*/
- th.td_id = be16_to_cpu(*(__be16 *) (blob)) - 1;
+ th.td_id = get_unaligned_be16(blob) - 1;
if (th.td_id > YYTD_ID_MAX)
goto out;
- th.td_flags = be16_to_cpu(*(__be16 *) (blob + 2));
- th.td_lolen = be32_to_cpu(*(__be32 *) (blob + 8));
+ th.td_flags = get_unaligned_be16(blob + 2);
+ th.td_lolen = get_unaligned_be32(blob + 8);
blob += sizeof(struct table_header);

if (!(th.td_flags == YYTD_DATA16 || th.td_flags == YYTD_DATA32 ||
@@ -66,14 +67,13 @@ static struct table_header *unpack_table(char *blob, size_t bsize)
table->td_flags = th.td_flags;
table->td_lolen = th.td_lolen;
if (th.td_flags == YYTD_DATA8)
- UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
- u8, u8, byte_to_byte);
+ memcpy(table->td_data, blob, th.td_lolen);
else if (th.td_flags == YYTD_DATA16)
UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
- u16, __be16, be16_to_cpu);
+ u16, __be16, get_unaligned_be16);
else if (th.td_flags == YYTD_DATA32)
UNPACK_ARRAY(table->td_data, blob, th.td_lolen,
- u32, __be32, be32_to_cpu);
+ u32, __be32, get_unaligned_be32);
else
goto fail;
/* if table was vmalloced make sure the page tables are synced
@@ -277,14 +277,14 @@ struct aa_dfa *aa_dfa_unpack(void *blob, size_t size, int flags)
if (size < sizeof(struct table_set_header))
goto fail;

- if (ntohl(*(__be32 *) data) != YYTH_MAGIC)
+ if (get_unaligned_be32(data) != YYTH_MAGIC)
goto fail;

- hsize = ntohl(*(__be32 *) (data + 4));
+ hsize = get_unaligned_be32(data + 4);
if (size < hsize)
goto fail;

- dfa->flags = ntohs(*(__be16 *) (data + 12));
+ dfa->flags = get_unaligned_be16(data + 12);
if (dfa->flags & ~(YYTH_FLAGS))
goto fail;

@@ -293,7 +293,7 @@ struct aa_dfa *aa_dfa_unpack(void *blob, size_t size, int flags)
* if (dfa->flags & YYTH_FLAGS_OOB_TRANS) {
* if (hsize < 16 + 4)
* goto fail;
- * dfa->max_oob = ntol(*(__be32 *) (data + 16));
+ * dfa->max_oob = get_unaligned_be32(data + 16);
* if (dfa->max <= MAX_OOB_SUPPORTED) {
* pr_err("AppArmor DFA OOB greater than supported\n");
* goto fail;
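[Reviewer note, not part of the patch] The match.c conversion replaces casts like `be32_to_cpu(*(__be32 *)blob)` — which assume the blob is suitably aligned — with get_unaligned_be*() accessors. Those can be implemented as plain byte-by-byte reads, which is the contract this userspace sketch mirrors (not the kernel's actual implementation, which may use optimized unaligned loads):

```c
#include <stdint.h>

/* Read big-endian values one byte at a time so the pointer needs no
 * particular alignment -- the contract of get_unaligned_be16/32(). */
static uint16_t load_be16(const void *p)
{
	const uint8_t *b = p;

	return (uint16_t)((b[0] << 8) | b[1]);
}

static uint32_t load_be32(const void *p)
{
	const uint8_t *b = p;

	return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
	       ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}
```

Note the test below deliberately reads at an odd offset — the case the original casts could trap or slow down on strict-alignment architectures.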
diff --git a/security/apparmor/net.c b/security/apparmor/net.c
index 77413a5191179..f6f749191f601 100644
--- a/security/apparmor/net.c
+++ b/security/apparmor/net.c
@@ -190,8 +190,10 @@ int aa_sock_file_perm(const struct cred *subj_cred, struct aa_label *label,
const char *op, u32 request, struct socket *sock)
{
AA_BUG(!label);
- AA_BUG(!sock);
- AA_BUG(!sock->sk);
+
+ /* sock && sock->sk can be NULL for sockets being set up or torn down */
+ if (!sock || !sock->sk)
+ return 0;

return aa_label_sk_perm(subj_cred, label, op, request, sock->sk);
}
diff --git a/security/apparmor/policy_unpack.c b/security/apparmor/policy_unpack.c
index 3483c595f999f..8f7d6ff5aef60 100644
--- a/security/apparmor/policy_unpack.c
+++ b/security/apparmor/policy_unpack.c
@@ -683,8 +683,10 @@ static ssize_t unpack_perms_table(struct aa_ext *e, struct aa_perms **perms)
if (!aa_unpack_array(e, NULL, &size))
goto fail_reset;
*perms = kcalloc(size, sizeof(struct aa_perms), GFP_KERNEL);
- if (!*perms)
- goto fail_reset;
+ if (!*perms) {
+ e->pos = pos;
+ return -ENOMEM;
+ }
for (i = 0; i < size; i++) {
if (!unpack_perm(e, version, &(*perms)[i]))
goto fail;
diff --git a/security/apparmor/resource.c b/security/apparmor/resource.c
index dcc94c3153d51..a7eee815f1215 100644
--- a/security/apparmor/resource.c
+++ b/security/apparmor/resource.c
@@ -201,6 +201,11 @@ void __aa_transition_rlimits(struct aa_label *old_l, struct aa_label *new_l)
rules->rlimits.limits[j].rlim_max);
/* soft limit should not exceed hard limit */
rlim->rlim_cur = min(rlim->rlim_cur, rlim->rlim_max);
+ if (j == RLIMIT_CPU &&
+ rlim->rlim_cur != RLIM_INFINITY &&
+ IS_ENABLED(CONFIG_POSIX_TIMERS))
+ (void) update_rlimit_cpu(current->group_leader,
+ rlim->rlim_cur);
}
}
}
diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
index 7c06ffd633d24..c588af7cc5f87 100644
--- a/security/integrity/evm/evm_crypto.c
+++ b/security/integrity/evm/evm_crypto.c
@@ -401,6 +401,7 @@ int evm_init_hmac(struct inode *inode, const struct xattr *xattrs,
{
struct shash_desc *desc;
const struct xattr *xattr;
+ struct xattr_list *xattr_entry;

desc = init_desc(EVM_XATTR_HMAC, HASH_ALGO_SHA1);
if (IS_ERR(desc)) {
@@ -408,11 +409,16 @@ int evm_init_hmac(struct inode *inode, const struct xattr *xattrs,
return PTR_ERR(desc);
}

- for (xattr = xattrs; xattr->name; xattr++) {
- if (!evm_protected_xattr(xattr->name))
- continue;
+ list_for_each_entry_lockless(xattr_entry, &evm_config_xattrnames,
+ list) {
+ for (xattr = xattrs; xattr->name; xattr++) {
+ if (strcmp(xattr_entry->name +
+ XATTR_SECURITY_PREFIX_LEN, xattr->name) != 0)
+ continue;

- crypto_shash_update(desc, xattr->value, xattr->value_len);
+ crypto_shash_update(desc, xattr->value,
+ xattr->value_len);
+ }
}

hmac_add_misc(desc, inode, EVM_XATTR_HMAC, hmac_val);
diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
index 1e35c9f807b2b..109ad155ffc2a 100644
--- a/security/smack/smackfs.c
+++ b/security/smack/smackfs.c
@@ -68,6 +68,7 @@ enum smk_inos {
static DEFINE_MUTEX(smack_cipso_lock);
static DEFINE_MUTEX(smack_ambient_lock);
static DEFINE_MUTEX(smk_net4addr_lock);
+static DEFINE_MUTEX(smk_cipso_doi_lock);
#if IS_ENABLED(CONFIG_IPV6)
static DEFINE_MUTEX(smk_net6addr_lock);
#endif /* CONFIG_IPV6 */
@@ -139,7 +140,7 @@ struct smack_parsed_rule {
int smk_access2;
};

-static int smk_cipso_doi_value = SMACK_CIPSO_DOI_DEFAULT;
+static u32 smk_cipso_doi_value = CIPSO_V4_DOI_UNKNOWN;

/*
* Values for parsing cipso rules
@@ -679,43 +680,60 @@ static const struct file_operations smk_load_ops = {
};

/**
- * smk_cipso_doi - initialize the CIPSO domain
+ * smk_cipso_doi - set netlabel maps
+ * @ndoi: new value for our CIPSO DOI
+ * @gfp_flags: kmalloc allocation context
*/
-static void smk_cipso_doi(void)
+static int
+smk_cipso_doi(u32 ndoi, gfp_t gfp_flags)
{
- int rc;
+ int rc = 0;
struct cipso_v4_doi *doip;
struct netlbl_audit nai;

- smk_netlabel_audit_set(&nai);
+ mutex_lock(&smk_cipso_doi_lock);

- rc = netlbl_cfg_map_del(NULL, PF_INET, NULL, NULL, &nai);
- if (rc != 0)
- printk(KERN_WARNING "%s:%d remove rc = %d\n",
- __func__, __LINE__, rc);
+ if (smk_cipso_doi_value == ndoi)
+ goto clr_doi_lock;
+
+ smk_netlabel_audit_set(&nai);

- doip = kmalloc(sizeof(struct cipso_v4_doi), GFP_KERNEL | __GFP_NOFAIL);
+ doip = kmalloc(sizeof(struct cipso_v4_doi), gfp_flags);
+ if (!doip) {
+ rc = -ENOMEM;
+ goto clr_doi_lock;
+ }
doip->map.std = NULL;
- doip->doi = smk_cipso_doi_value;
+ doip->doi = ndoi;
doip->type = CIPSO_V4_MAP_PASS;
doip->tags[0] = CIPSO_V4_TAG_RBITMAP;
for (rc = 1; rc < CIPSO_V4_TAG_MAXCNT; rc++)
doip->tags[rc] = CIPSO_V4_TAG_INVALID;

rc = netlbl_cfg_cipsov4_add(doip, &nai);
- if (rc != 0) {
- printk(KERN_WARNING "%s:%d cipso add rc = %d\n",
- __func__, __LINE__, rc);
+ if (rc) {
kfree(doip);
- return;
+ goto clr_doi_lock;
}
- rc = netlbl_cfg_cipsov4_map_add(doip->doi, NULL, NULL, NULL, &nai);
- if (rc != 0) {
- printk(KERN_WARNING "%s:%d map add rc = %d\n",
- __func__, __LINE__, rc);
- netlbl_cfg_cipsov4_del(doip->doi, &nai);
- return;
+
+ if (smk_cipso_doi_value != CIPSO_V4_DOI_UNKNOWN) {
+ rc = netlbl_cfg_map_del(NULL, PF_INET, NULL, NULL, &nai);
+ if (rc && rc != -ENOENT)
+ goto clr_ndoi_def;
+
+ netlbl_cfg_cipsov4_del(smk_cipso_doi_value, &nai);
}
+
+ rc = netlbl_cfg_cipsov4_map_add(ndoi, NULL, NULL, NULL, &nai);
+ if (rc) {
+ smk_cipso_doi_value = CIPSO_V4_DOI_UNKNOWN; // no default map
+clr_ndoi_def: netlbl_cfg_cipsov4_del(ndoi, &nai);
+ } else
+ smk_cipso_doi_value = ndoi;
+
+clr_doi_lock:
+ mutex_unlock(&smk_cipso_doi_lock);
+ return rc;
}

/**
@@ -1580,7 +1598,7 @@ static ssize_t smk_read_doi(struct file *filp, char __user *buf,
if (*ppos != 0)
return 0;

- sprintf(temp, "%d", smk_cipso_doi_value);
+ sprintf(temp, "%lu", (unsigned long)smk_cipso_doi_value);
rc = simple_read_from_buffer(buf, count, ppos, temp, strlen(temp));

return rc;
@@ -1599,7 +1617,7 @@ static ssize_t smk_write_doi(struct file *file, const char __user *buf,
size_t count, loff_t *ppos)
{
char temp[80];
- int i;
+ unsigned long u;

if (!smack_privileged(CAP_MAC_ADMIN))
return -EPERM;
@@ -1612,14 +1630,13 @@ static ssize_t smk_write_doi(struct file *file, const char __user *buf,

temp[count] = '\0';

- if (sscanf(temp, "%d", &i) != 1)
+ if (kstrtoul(temp, 10, &u))
return -EINVAL;

- smk_cipso_doi_value = i;
-
- smk_cipso_doi();
+ if (u == CIPSO_V4_DOI_UNKNOWN || u > U32_MAX)
+ return -EINVAL;

- return count;
+ return smk_cipso_doi(u, GFP_KERNEL) ? : count;
}

static const struct file_operations smk_doi_ops = {
@@ -2996,6 +3013,7 @@ static int __init init_smk_fs(void)
{
int err;
int rc;
+ struct netlbl_audit nai;

if (smack_enabled == 0)
return 0;
@@ -3014,7 +3032,10 @@ static int __init init_smk_fs(void)
}
}

- smk_cipso_doi();
+ smk_netlabel_audit_set(&nai);
+ (void) netlbl_cfg_map_del(NULL, PF_INET, NULL, NULL, &nai);
+ (void) smk_cipso_doi(SMACK_CIPSO_DOI_DEFAULT,
+ GFP_KERNEL | __GFP_NOFAIL);
smk_unlbl_ambient(NULL);

rc = smack_populate_secattr(&smack_known_floor);
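[Reviewer note, not part of the patch] smk_write_doi() now parses with kstrtoul and rejects both the reserved "unknown" DOI and values that do not fit in u32. A userspace sketch of the same validation using strtoul (DOI_UNKNOWN of 0 is an assumed stand-in for CIPSO_V4_DOI_UNKNOWN):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#define DOI_UNKNOWN 0u	/* assumed stand-in for CIPSO_V4_DOI_UNKNOWN */

/* Parse a DOI the way the patched smk_write_doi() does: strict
 * decimal, full-string match, must fit in u32, and the reserved
 * "unknown" value is rejected. */
static int parse_doi(const char *s, uint32_t *out)
{
	char *end;
	unsigned long u;

	errno = 0;
	u = strtoul(s, &end, 10);
	if (errno || end == s || *end != '\0')
		return -EINVAL;
	if (u == DOI_UNKNOWN || u > UINT32_MAX)
		return -EINVAL;
	*out = (uint32_t)u;
	return 0;
}
```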
diff --git a/sound/core/oss/mixer_oss.c b/sound/core/oss/mixer_oss.c
index 05fc8911479c1..a6cc74bb25fd4 100644
--- a/sound/core/oss/mixer_oss.c
+++ b/sound/core/oss/mixer_oss.c
@@ -525,6 +525,8 @@ static void snd_mixer_oss_get_volume1_vol(struct snd_mixer_oss_file *fmixer,
if (numid == ID_UNKNOWN)
return;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return;
kctl = snd_ctl_find_numid(card, numid);
if (!kctl)
return;
@@ -558,6 +560,8 @@ static void snd_mixer_oss_get_volume1_sw(struct snd_mixer_oss_file *fmixer,
if (numid == ID_UNKNOWN)
return;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return;
kctl = snd_ctl_find_numid(card, numid);
if (!kctl)
return;
@@ -618,6 +622,8 @@ static void snd_mixer_oss_put_volume1_vol(struct snd_mixer_oss_file *fmixer,
if (numid == ID_UNKNOWN)
return;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return;
kctl = snd_ctl_find_numid(card, numid);
if (!kctl)
return;
@@ -655,6 +661,8 @@ static void snd_mixer_oss_put_volume1_sw(struct snd_mixer_oss_file *fmixer,
if (numid == ID_UNKNOWN)
return;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return;
kctl = snd_ctl_find_numid(card, numid);
if (!kctl)
return;
@@ -792,6 +800,8 @@ static int snd_mixer_oss_get_recsrc2(struct snd_mixer_oss_file *fmixer, unsigned
if (uinfo == NULL || uctl == NULL)
return -ENOMEM;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return -ENODEV;
kctl = snd_mixer_oss_test_id(mixer, "Capture Source", 0);
if (!kctl)
return -ENOENT;
@@ -835,6 +845,8 @@ static int snd_mixer_oss_put_recsrc2(struct snd_mixer_oss_file *fmixer, unsigned
if (uinfo == NULL || uctl == NULL)
return -ENOMEM;
guard(rwsem_read)(&card->controls_rwsem);
+ if (card->shutdown)
+ return -ENODEV;
kctl = snd_mixer_oss_test_id(mixer, "Capture Source", 0);
if (!kctl)
return -ENOENT;
@@ -878,6 +890,8 @@ static int snd_mixer_oss_build_test(struct snd_mixer_oss *mixer, struct slot *sl
int err;

scoped_guard(rwsem_read, &card->controls_rwsem) {
+ if (card->shutdown)
+ return -ENODEV;
kcontrol = snd_mixer_oss_test_id(mixer, name, index);
if (kcontrol == NULL)
return 0;
@@ -1002,6 +1016,8 @@ static int snd_mixer_oss_build_input(struct snd_mixer_oss *mixer,
if (snd_mixer_oss_build_test_all(mixer, ptr, &slot))
return 0;
guard(rwsem_read)(&mixer->card->controls_rwsem);
+ if (mixer->card->shutdown)
+ return -ENODEV;
kctl = NULL;
if (!ptr->index)
kctl = snd_mixer_oss_test_id(mixer, "Capture Source", 0);
diff --git a/sound/core/pcm.c b/sound/core/pcm.c
index 290690fc2abcb..ff1e9f8c1ecae 100644
--- a/sound/core/pcm.c
+++ b/sound/core/pcm.c
@@ -328,13 +328,13 @@ static const char *snd_pcm_oss_format_name(int format)
static void snd_pcm_proc_info_read(struct snd_pcm_substream *substream,
struct snd_info_buffer *buffer)
{
- struct snd_pcm_info *info __free(kfree) = NULL;
int err;

if (! substream)
return;

- info = kmalloc(sizeof(*info), GFP_KERNEL);
+ struct snd_pcm_info *info __free(kfree) =
+ kmalloc(sizeof(*info), GFP_KERNEL);
if (!info)
return;

diff --git a/sound/core/pcm_compat.c b/sound/core/pcm_compat.c
index a42ec7f5a1daf..c1c64da2eabd0 100644
--- a/sound/core/pcm_compat.c
+++ b/sound/core/pcm_compat.c
@@ -235,7 +235,6 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream,
int refine,
struct snd_pcm_hw_params32 __user *data32)
{
- struct snd_pcm_hw_params *data __free(kfree) = NULL;
struct snd_pcm_runtime *runtime;
int err;

@@ -243,7 +242,8 @@ static int snd_pcm_ioctl_hw_params_compat(struct snd_pcm_substream *substream,
if (!runtime)
return -ENOTTY;

- data = kmalloc(sizeof(*data), GFP_KERNEL);
+ struct snd_pcm_hw_params *data __free(kfree) =
+ kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;

@@ -332,7 +332,6 @@ static int snd_pcm_ioctl_xfern_compat(struct snd_pcm_substream *substream,
compat_caddr_t buf;
compat_caddr_t __user *bufptr;
u32 frames;
- void __user **bufs __free(kfree) = NULL;
int err, ch, i;

if (! substream->runtime)
@@ -349,7 +348,9 @@ static int snd_pcm_ioctl_xfern_compat(struct snd_pcm_substream *substream,
get_user(frames, &data32->frames))
return -EFAULT;
bufptr = compat_ptr(buf);
- bufs = kmalloc_array(ch, sizeof(void __user *), GFP_KERNEL);
+
+ void __user **bufs __free(kfree) =
+ kmalloc_array(ch, sizeof(void __user *), GFP_KERNEL);
if (bufs == NULL)
return -ENOMEM;
for (i = 0; i < ch; i++) {
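[Reviewer note, not part of the patch] The sound/core hunks move `__free(kfree)` variables to the point of first use. The userspace cousin of that kernel idiom is the GCC/Clang `cleanup` attribute — a sketch, not the kernel's cleanup.h machinery:

```c
#include <stdlib.h>

/* Scope-based cleanup: the pointer is freed automatically on every
 * return path once it goes out of scope, like __free(kfree). */
static void free_ptr(void *p)
{
	free(*(void **)p);
}
#define auto_free __attribute__((cleanup(free_ptr)))

static int use_buffer(size_t n)
{
	/* Declaring at the point of allocation (as the patch does)
	 * means the variable can never be "cleaned up" while still
	 * uninitialized on an earlier return path. */
	auto_free char *buf = malloc(n);

	if (!buf)
		return -1;
	buf[0] = 0;
	return 0;	/* buf freed here automatically */
}
```

The earlier `type *p __free(kfree) = NULL; ... p = alloc();` style worked, but splitting declaration and assignment leaves a window where an added early return would run the cleanup on a NULL (fine) or, worse, reordered code could free an uninitialized pointer — hence the kernel's preference for declare-and-initialize in one statement.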
diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
index 6417178ca0978..155df14e86266 100644
--- a/sound/core/pcm_native.c
+++ b/sound/core/pcm_native.c
@@ -242,10 +242,10 @@ int snd_pcm_info(struct snd_pcm_substream *substream, struct snd_pcm_info *info)
int snd_pcm_info_user(struct snd_pcm_substream *substream,
struct snd_pcm_info __user * _info)
{
- struct snd_pcm_info *info __free(kfree) = NULL;
int err;
+ struct snd_pcm_info *info __free(kfree) =
+ kmalloc(sizeof(*info), GFP_KERNEL);

- info = kmalloc(sizeof(*info), GFP_KERNEL);
if (! info)
return -ENOMEM;
err = snd_pcm_info(substream, info);
@@ -364,7 +364,6 @@ static int constrain_params_by_rules(struct snd_pcm_substream *substream,
struct snd_pcm_hw_constraints *constrs =
&substream->runtime->hw_constraints;
unsigned int k;
- unsigned int *rstamps __free(kfree) = NULL;
unsigned int vstamps[SNDRV_PCM_HW_PARAM_LAST_INTERVAL + 1];
unsigned int stamp;
struct snd_pcm_hw_rule *r;
@@ -380,7 +379,8 @@ static int constrain_params_by_rules(struct snd_pcm_substream *substream,
* Each member of 'rstamps' array represents the sequence number of
* recent application of corresponding rule.
*/
- rstamps = kcalloc(constrs->rules_num, sizeof(unsigned int), GFP_KERNEL);
+ unsigned int *rstamps __free(kfree) =
+ kcalloc(constrs->rules_num, sizeof(unsigned int), GFP_KERNEL);
if (!rstamps)
return -ENOMEM;

@@ -583,10 +583,10 @@ EXPORT_SYMBOL(snd_pcm_hw_refine);
static int snd_pcm_hw_refine_user(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params __user * _params)
{
- struct snd_pcm_hw_params *params __free(kfree) = NULL;
int err;
+ struct snd_pcm_hw_params *params __free(kfree) =
+ memdup_user(_params, sizeof(*params));

- params = memdup_user(_params, sizeof(*params));
if (IS_ERR(params))
return PTR_ERR(params);

@@ -889,10 +889,10 @@ static int snd_pcm_hw_params(struct snd_pcm_substream *substream,
static int snd_pcm_hw_params_user(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params __user * _params)
{
- struct snd_pcm_hw_params *params __free(kfree) = NULL;
int err;
+ struct snd_pcm_hw_params *params __free(kfree) =
+ memdup_user(_params, sizeof(*params));

- params = memdup_user(_params, sizeof(*params));
if (IS_ERR(params))
return PTR_ERR(params);

@@ -2267,7 +2267,6 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
{
struct snd_pcm_file *pcm_file;
struct snd_pcm_substream *substream1;
- struct snd_pcm_group *group __free(kfree) = NULL;
struct snd_pcm_group *target_group;
bool nonatomic = substream->pcm->nonatomic;
CLASS(fd, f)(fd);
@@ -2283,7 +2282,8 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
if (substream == substream1)
return -EINVAL;

- group = kzalloc(sizeof(*group), GFP_KERNEL);
+ struct snd_pcm_group *group __free(kfree) =
+ kzalloc(sizeof(*group), GFP_KERNEL);
if (!group)
return -ENOMEM;
snd_pcm_group_init(group);
@@ -3277,7 +3277,7 @@ static int snd_pcm_xfern_frames_ioctl(struct snd_pcm_substream *substream,
if (copy_from_user(&xfern, _xfern, sizeof(xfern)))
return -EFAULT;

- bufs = memdup_user(xfern.bufs, sizeof(void *) * runtime->channels);
+ bufs = memdup_array_user(xfern.bufs, runtime->channels, sizeof(void *));
if (IS_ERR(bufs))
return PTR_ERR(bufs);
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
@@ -3551,7 +3551,6 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
struct snd_pcm_runtime *runtime;
snd_pcm_sframes_t result;
unsigned long i;
- void __user **bufs __free(kfree) = NULL;
snd_pcm_uframes_t frames;
const struct iovec *iov = iter_iov(to);

@@ -3570,7 +3569,9 @@ static ssize_t snd_pcm_readv(struct kiocb *iocb, struct iov_iter *to)
if (!frame_aligned(runtime, iov->iov_len))
return -EINVAL;
frames = bytes_to_samples(runtime, iov->iov_len);
- bufs = kmalloc_array(to->nr_segs, sizeof(void *), GFP_KERNEL);
+
+ void __user **bufs __free(kfree) =
+ kmalloc_array(to->nr_segs, sizeof(void *), GFP_KERNEL);
if (bufs == NULL)
return -ENOMEM;
for (i = 0; i < to->nr_segs; ++i) {
@@ -3590,7 +3591,6 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
struct snd_pcm_runtime *runtime;
snd_pcm_sframes_t result;
unsigned long i;
- void __user **bufs __free(kfree) = NULL;
snd_pcm_uframes_t frames;
const struct iovec *iov = iter_iov(from);

@@ -3608,7 +3608,9 @@ static ssize_t snd_pcm_writev(struct kiocb *iocb, struct iov_iter *from)
!frame_aligned(runtime, iov->iov_len))
return -EINVAL;
frames = bytes_to_samples(runtime, iov->iov_len);
- bufs = kmalloc_array(from->nr_segs, sizeof(void *), GFP_KERNEL);
+
+ void __user **bufs __free(kfree) =
+ kmalloc_array(from->nr_segs, sizeof(void *), GFP_KERNEL);
if (bufs == NULL)
return -ENOMEM;
for (i = 0; i < from->nr_segs; ++i) {
@@ -4060,15 +4062,15 @@ static void snd_pcm_hw_convert_to_old_params(struct snd_pcm_hw_params_old *opara
static int snd_pcm_hw_refine_old_user(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params_old __user * _oparams)
{
- struct snd_pcm_hw_params *params __free(kfree) = NULL;
- struct snd_pcm_hw_params_old *oparams __free(kfree) = NULL;
int err;

- params = kmalloc(sizeof(*params), GFP_KERNEL);
+ struct snd_pcm_hw_params *params __free(kfree) =
+ kmalloc(sizeof(*params), GFP_KERNEL);
if (!params)
return -ENOMEM;

- oparams = memdup_user(_oparams, sizeof(*oparams));
+ struct snd_pcm_hw_params_old *oparams __free(kfree) =
+ memdup_user(_oparams, sizeof(*oparams));
if (IS_ERR(oparams))
return PTR_ERR(oparams);
snd_pcm_hw_convert_from_old_params(params, oparams);
@@ -4089,15 +4091,15 @@ static int snd_pcm_hw_refine_old_user(struct snd_pcm_substream *substream,
static int snd_pcm_hw_params_old_user(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params_old __user * _oparams)
{
- struct snd_pcm_hw_params *params __free(kfree) = NULL;
- struct snd_pcm_hw_params_old *oparams __free(kfree) = NULL;
int err;

- params = kmalloc(sizeof(*params), GFP_KERNEL);
+ struct snd_pcm_hw_params *params __free(kfree) =
+ kmalloc(sizeof(*params), GFP_KERNEL);
if (!params)
return -ENOMEM;

- oparams = memdup_user(_oparams, sizeof(*oparams));
+ struct snd_pcm_hw_params_old *oparams __free(kfree) =
+ memdup_user(_oparams, sizeof(*oparams));
if (IS_ERR(oparams))
return PTR_ERR(oparams);

diff --git a/sound/core/vmaster.c b/sound/core/vmaster.c
index c657659b236c4..76cc64245f5df 100644
--- a/sound/core/vmaster.c
+++ b/sound/core/vmaster.c
@@ -56,10 +56,10 @@ struct link_follower {

static int follower_update(struct link_follower *follower)
{
- struct snd_ctl_elem_value *uctl __free(kfree) = NULL;
int err, ch;
+ struct snd_ctl_elem_value *uctl __free(kfree) =
+ kzalloc(sizeof(*uctl), GFP_KERNEL);

- uctl = kzalloc(sizeof(*uctl), GFP_KERNEL);
if (!uctl)
return -ENOMEM;
uctl->id = follower->follower.id;
@@ -74,7 +74,6 @@ static int follower_update(struct link_follower *follower)
/* get the follower ctl info and save the initial values */
static int follower_init(struct link_follower *follower)
{
- struct snd_ctl_elem_info *uinfo __free(kfree) = NULL;
int err;

if (follower->info.count) {
@@ -84,7 +83,8 @@ static int follower_init(struct link_follower *follower)
return 0;
}

- uinfo = kmalloc(sizeof(*uinfo), GFP_KERNEL);
+ struct snd_ctl_elem_info *uinfo __free(kfree) =
+ kmalloc(sizeof(*uinfo), GFP_KERNEL);
if (!uinfo)
return -ENOMEM;
uinfo->id = follower->follower.id;
@@ -341,9 +341,9 @@ static int master_get(struct snd_kcontrol *kcontrol,
static int sync_followers(struct link_master *master, int old_val, int new_val)
{
struct link_follower *follower;
- struct snd_ctl_elem_value *uval __free(kfree) = NULL;
+ struct snd_ctl_elem_value *uval __free(kfree) =
+ kmalloc(sizeof(*uval), GFP_KERNEL);

- uval = kmalloc(sizeof(*uval), GFP_KERNEL);
if (!uval)
return -ENOMEM;
list_for_each_entry(follower, &master->followers, list) {
diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
index 84ab357b840d6..482e801a496a1 100644
--- a/sound/pci/hda/patch_conexant.c
+++ b/sound/pci/hda/patch_conexant.c
@@ -1123,6 +1123,7 @@ static const struct hda_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
+ SND_PCI_QUIRK(0x1d05, 0x3012, "MECHREVO Wujie 15X Pro", CXT_FIXUP_HEADSET_MIC),
HDA_CODEC_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
HDA_CODEC_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
{}
diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
index 5c2f442fca79a..85178a0303a57 100644
--- a/sound/pci/hda/patch_realtek.c
+++ b/sound/pci/hda/patch_realtek.c
@@ -4763,6 +4763,22 @@ static void alc245_fixup_hp_mute_led_v1_coefbit(struct hda_codec *codec,
}
}

+static void alc245_fixup_hp_mute_led_v2_coefbit(struct hda_codec *codec,
+ const struct hda_fixup *fix,
+ int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->mute_led_polarity = 0;
+ spec->mute_led_coef.idx = 0x0b;
+ spec->mute_led_coef.mask = 1 << 3;
+ spec->mute_led_coef.on = 1 << 3;
+ spec->mute_led_coef.off = 0;
+ snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
+ }
+}
+
/* turn on/off mic-mute LED per capture hook by coef bit */
static int coef_micmute_led_set(struct led_classdev *led_cdev,
enum led_brightness brightness)
@@ -4842,6 +4858,13 @@ static void alc285_fixup_hp_spectre_x360_mute_led(struct hda_codec *codec,
alc285_fixup_hp_gpio_micmute_led(codec, fix, action);
}

+static void alc245_fixup_hp_envy_x360_mute_led(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ alc245_fixup_hp_mute_led_v1_coefbit(codec, fix, action);
+ alc245_fixup_hp_gpio_led(codec, fix, action);
+}
+
static void alc236_fixup_hp_mute_led(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
@@ -5024,6 +5047,163 @@ static void alc298_samsung_v2_init_amps(struct hda_codec *codec,
spec->gen.pcm_playback_hook = alc298_samsung_v2_playback_hook;
}

+/* LG Gram Style 14: program vendor coef sequence used by HDA-verb workaround */
+struct alc298_lg_gram_style_seq {
+ unsigned short verb;
+ unsigned short idx;
+ unsigned short val;
+};
+
+static void alc298_lg_gram_style_coef_write(struct hda_codec *codec,
+ unsigned int verb,
+ unsigned int idx,
+ unsigned int val)
+{
+ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_COEF_INDEX, 0x23);
+ snd_hda_codec_write(codec, 0x20, 0, verb, idx);
+ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_PROC_COEF, 0x00);
+ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_PROC_COEF, val);
+ snd_hda_codec_write(codec, 0x20, 0, AC_VERB_SET_PROC_COEF, 0xb011);
+}
+
+static void alc298_lg_gram_style_run_seq(struct hda_codec *codec,
+ const struct alc298_lg_gram_style_seq *seq,
+ int seq_size)
+{
+ int i;
+
+ for (i = 0; i < seq_size; i++)
+ alc298_lg_gram_style_coef_write(codec, seq[i].verb,
+ seq[i].idx, seq[i].val);
+}
+
+/* Coef sequences derived from the HDA-verb workaround for this model. */
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_preinit_seq[] = {
+ { 0x420, 0x00, 0x01 },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_disable_seq[] = {
+ { 0x423, 0xff, 0x00 },
+ { 0x420, 0x3a, 0x80 },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_enable_seq[] = {
+ { 0x420, 0x3a, 0x81 },
+ { 0x423, 0xff, 0x01 },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_init_seq_38[] = {
+ { 0x423, 0xe1, 0x00 }, { 0x420, 0x12, 0x6f }, { 0x420, 0x14, 0x00 },
+ { 0x420, 0x1b, 0x01 }, { 0x420, 0x1d, 0x01 }, { 0x420, 0x1f, 0xfe },
+ { 0x420, 0x21, 0x00 }, { 0x420, 0x22, 0x10 }, { 0x420, 0x3d, 0x05 },
+ { 0x420, 0x3f, 0x03 }, { 0x420, 0x50, 0x2c }, { 0x420, 0x76, 0x0e },
+ { 0x420, 0x7c, 0x4a }, { 0x420, 0x81, 0x03 }, { 0x423, 0x99, 0x03 },
+ { 0x423, 0xa4, 0xb5 }, { 0x423, 0xa5, 0x01 }, { 0x423, 0xba, 0x94 },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_init_seq_39[] = {
+ { 0x423, 0xe1, 0x00 }, { 0x420, 0x12, 0x6f }, { 0x420, 0x14, 0x00 },
+ { 0x420, 0x1b, 0x02 }, { 0x420, 0x1d, 0x02 }, { 0x420, 0x1f, 0xfd },
+ { 0x420, 0x21, 0x01 }, { 0x420, 0x22, 0x10 }, { 0x420, 0x3d, 0x05 },
+ { 0x420, 0x3f, 0x03 }, { 0x420, 0x50, 0x2c }, { 0x420, 0x76, 0x0e },
+ { 0x420, 0x7c, 0x4a }, { 0x420, 0x81, 0x03 }, { 0x423, 0x99, 0x03 },
+ { 0x423, 0xa4, 0xb5 }, { 0x423, 0xa5, 0x01 }, { 0x423, 0xba, 0x94 },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_init_seq_3c[] = {
+ { 0x423, 0xe1, 0x00 }, { 0x420, 0x12, 0x6f }, { 0x420, 0x14, 0x00 },
+ { 0x420, 0x1b, 0x01 }, { 0x420, 0x1d, 0x01 }, { 0x420, 0x1f, 0xfe },
+ { 0x420, 0x21, 0x00 }, { 0x420, 0x22, 0x10 }, { 0x420, 0x3d, 0x05 },
+ { 0x420, 0x3f, 0x03 }, { 0x420, 0x50, 0x2c }, { 0x420, 0x76, 0x0e },
+ { 0x420, 0x7c, 0x4a }, { 0x420, 0x81, 0x03 }, { 0x423, 0xba, 0x8d },
+};
+
+static const struct alc298_lg_gram_style_seq alc298_lg_gram_style_init_seq_3d[] = {
+ { 0x423, 0xe1, 0x00 }, { 0x420, 0x12, 0x6f }, { 0x420, 0x14, 0x00 },
+ { 0x420, 0x1b, 0x02 }, { 0x420, 0x1d, 0x02 }, { 0x420, 0x1f, 0xfd },
+ { 0x420, 0x21, 0x01 }, { 0x420, 0x22, 0x10 }, { 0x420, 0x3d, 0x05 },
+ { 0x420, 0x3f, 0x03 }, { 0x420, 0x50, 0x2c }, { 0x420, 0x76, 0x0e },
+ { 0x420, 0x7c, 0x4a }, { 0x420, 0x81, 0x03 }, { 0x423, 0xba, 0x8d },
+};
+
+struct alc298_lg_gram_style_amp_desc {
+ unsigned char nid;
+ const struct alc298_lg_gram_style_seq *init_seq;
+ int init_seq_size;
+};
+
+static const struct alc298_lg_gram_style_amp_desc alc298_lg_gram_style_amps[] = {
+ { 0x38, alc298_lg_gram_style_init_seq_38,
+ ARRAY_SIZE(alc298_lg_gram_style_init_seq_38) },
+ { 0x39, alc298_lg_gram_style_init_seq_39,
+ ARRAY_SIZE(alc298_lg_gram_style_init_seq_39) },
+ { 0x3c, alc298_lg_gram_style_init_seq_3c,
+ ARRAY_SIZE(alc298_lg_gram_style_init_seq_3c) },
+ { 0x3d, alc298_lg_gram_style_init_seq_3d,
+ ARRAY_SIZE(alc298_lg_gram_style_init_seq_3d) },
+};
+
+static void alc298_lg_gram_style_enable_amps(struct hda_codec *codec)
+{
+ struct alc_spec *spec = codec->spec;
+ int i;
+
+ for (i = 0; i < spec->num_speaker_amps; i++) {
+ alc_write_coef_idx(codec, 0x22, alc298_lg_gram_style_amps[i].nid);
+ alc298_lg_gram_style_run_seq(codec,
+ alc298_lg_gram_style_enable_seq,
+ ARRAY_SIZE(alc298_lg_gram_style_enable_seq));
+ }
+}
+
+static void alc298_lg_gram_style_disable_amps(struct hda_codec *codec)
+{
+ struct alc_spec *spec = codec->spec;
+ int i;
+
+ for (i = 0; i < spec->num_speaker_amps; i++) {
+ alc_write_coef_idx(codec, 0x22, alc298_lg_gram_style_amps[i].nid);
+ alc298_lg_gram_style_run_seq(codec,
+ alc298_lg_gram_style_disable_seq,
+ ARRAY_SIZE(alc298_lg_gram_style_disable_seq));
+ }
+}
+
+static void alc298_lg_gram_style_playback_hook(struct hda_pcm_stream *hinfo,
+ struct hda_codec *codec,
+ struct snd_pcm_substream *substream,
+ int action)
+{
+ if (action == HDA_GEN_PCM_ACT_OPEN)
+ alc298_lg_gram_style_enable_amps(codec);
+ if (action == HDA_GEN_PCM_ACT_CLOSE)
+ alc298_lg_gram_style_disable_amps(codec);
+}
+
+static void alc298_lg_gram_style_init_amps(struct hda_codec *codec)
+{
+ struct alc_spec *spec = codec->spec;
+ int i;
+
+ spec->num_speaker_amps = ARRAY_SIZE(alc298_lg_gram_style_amps);
+
+ for (i = 0; i < spec->num_speaker_amps; i++) {
+ alc_write_coef_idx(codec, 0x22, alc298_lg_gram_style_amps[i].nid);
+ alc298_lg_gram_style_run_seq(codec,
+ alc298_lg_gram_style_preinit_seq,
+ ARRAY_SIZE(alc298_lg_gram_style_preinit_seq));
+ alc298_lg_gram_style_run_seq(codec,
+ alc298_lg_gram_style_disable_seq,
+ ARRAY_SIZE(alc298_lg_gram_style_disable_seq));
+ alc298_lg_gram_style_run_seq(codec,
+ alc298_lg_gram_style_amps[i].init_seq,
+ alc298_lg_gram_style_amps[i].init_seq_size);
+ alc_write_coef_idx(codec, 0x89, 0x0);
+ }
+
+ spec->gen.pcm_playback_hook = alc298_lg_gram_style_playback_hook;
+}
+
static void alc298_fixup_samsung_amp_v2_2_amps(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
@@ -5038,6 +5218,13 @@ static void alc298_fixup_samsung_amp_v2_4_amps(struct hda_codec *codec,
alc298_samsung_v2_init_amps(codec, 4);
}

+static void alc298_fixup_lg_gram_style_14(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ if (action == HDA_FIXUP_ACT_PROBE)
+ alc298_lg_gram_style_init_amps(codec);
+}
+
static void gpio2_mic_hotkey_event(struct hda_codec *codec,
struct hda_jack_callback *event)
{
@@ -7907,6 +8094,7 @@ enum {
ALC285_FIXUP_HP_GPIO_LED,
ALC285_FIXUP_HP_MUTE_LED,
ALC285_FIXUP_HP_SPECTRE_X360_MUTE_LED,
+ ALC245_FIXUP_HP_ENVY_X360_MUTE_LED,
ALC285_FIXUP_HP_BEEP_MICMUTE_LED,
ALC236_FIXUP_HP_MUTE_LED_COEFBIT2,
ALC236_FIXUP_HP_GPIO_LED,
@@ -7916,6 +8104,7 @@ enum {
ALC298_FIXUP_SAMSUNG_AMP,
ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS,
ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS,
+ ALC298_FIXUP_LG_GRAM_STYLE_14,
ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
@@ -7992,6 +8181,7 @@ enum {
ALC287_FIXUP_YOGA7_14ARB7_I2C,
ALC245_FIXUP_HP_MUTE_LED_COEFBIT,
ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT,
+ ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT,
ALC245_FIXUP_HP_X360_MUTE_LEDS,
ALC287_FIXUP_THINKPAD_I2S_SPK,
ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD,
@@ -9517,6 +9707,10 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc285_fixup_hp_spectre_x360_mute_led,
},
+ [ALC245_FIXUP_HP_ENVY_X360_MUTE_LED] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_envy_x360_mute_led,
+ },
[ALC285_FIXUP_HP_BEEP_MICMUTE_LED] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc285_fixup_hp_beep,
@@ -9563,6 +9757,10 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc298_fixup_samsung_amp_v2_4_amps
},
+ [ALC298_FIXUP_LG_GRAM_STYLE_14] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc298_fixup_lg_gram_style_14
+ },
[ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = (const struct hda_verb[]) {
@@ -10268,6 +10466,10 @@ static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc245_fixup_hp_mute_led_v1_coefbit,
},
+ [ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_mute_led_v2_coefbit,
+ },
[ALC245_FIXUP_HP_X360_MUTE_LEDS] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc245_fixup_hp_mute_led_coefbit,
@@ -10692,8 +10894,10 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
SND_PCI_QUIRK(0x103c, 0x8896, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x8898, "HP EliteBook 845 G8 Notebook PC", ALC285_FIXUP_HP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x103c, 0x88b3, "HP ENVY x360 Convertible 15-es0xxx", ALC245_FIXUP_HP_ENVY_X360_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x88d0, "HP Pavilion 15-eh1xxx (mainboard 88D0)", ALC287_FIXUP_HP_GPIO_LED),
SND_PCI_QUIRK(0x103c, 0x88dd, "HP Pavilion 15z-ec200", ALC285_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x88eb, "HP Victus 16-e0xxx", ALC245_FIXUP_HP_MUTE_LED_V2_COEFBIT),
SND_PCI_QUIRK(0x103c, 0x8902, "HP OMEN 16", ALC285_FIXUP_HP_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x890e, "HP 255 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
SND_PCI_QUIRK(0x103c, 0x8919, "HP Pavilion Aero Laptop 13-be0xxx", ALC287_FIXUP_HP_GPIO_LED),
@@ -11356,6 +11560,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1854, 0x0488, "LG gram 16 (16Z90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
SND_PCI_QUIRK(0x1854, 0x0489, "LG gram 16 (16Z90R-A)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
SND_PCI_QUIRK(0x1854, 0x048a, "LG gram 17 (17ZD90R)", ALC298_FIXUP_SAMSUNG_AMP_V2_4_AMPS),
+ SND_PCI_QUIRK(0x1854, 0x0490, "LG Gram Style 14 (14Z90RS)", ALC298_FIXUP_LG_GRAM_STYLE_14),
SND_PCI_QUIRK(0x19e5, 0x3204, "Huawei MACH-WX9", ALC256_FIXUP_HUAWEI_MACH_WX9_PINS),
SND_PCI_QUIRK(0x19e5, 0x320f, "Huawei WRT-WX9 ", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x19e5, 0x3212, "Huawei KLV-WX9 ", ALC256_FIXUP_ACER_HEADSET_MIC),
diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
index 6d073ecad6f02..f33946fb895da 100644
--- a/sound/soc/amd/yc/acp6x-mach.c
+++ b/sound/soc/amd/yc/acp6x-mach.c
@@ -689,7 +689,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
DMI_MATCH(DMI_BOARD_NAME, "XyloD5_RBU"),
}
},
-
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Vivobook_ASUSLaptop M6501RR_M6501RR"),
+ }
+ },
{}
};

diff --git a/sound/soc/codecs/aw88261.c b/sound/soc/codecs/aw88261.c
index fb99871578c5a..38f362b022aa3 100644
--- a/sound/soc/codecs/aw88261.c
+++ b/sound/soc/codecs/aw88261.c
@@ -423,9 +423,10 @@ static int aw88261_dev_reg_update(struct aw88261 *aw88261,
if (ret)
break;

+ /* keep all three bits from current hw status */
read_val &= (~AW88261_AMPPD_MASK) | (~AW88261_PWDN_MASK) |
(~AW88261_HMUTE_MASK);
- reg_val &= (AW88261_AMPPD_MASK | AW88261_PWDN_MASK | AW88261_HMUTE_MASK);
+ reg_val &= (AW88261_AMPPD_MASK & AW88261_PWDN_MASK & AW88261_HMUTE_MASK);
reg_val |= read_val;

/* enable uls hmute */
diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
index 76159c45e6b52..c0d7ce64b2d96 100644
--- a/sound/soc/codecs/es8328.c
+++ b/sound/soc/codecs/es8328.c
@@ -756,17 +756,23 @@ static int es8328_resume(struct snd_soc_component *component)
es8328->supplies);
if (ret) {
dev_err(component->dev, "unable to enable regulators\n");
- return ret;
+ goto err_clk;
}

regcache_mark_dirty(regmap);
ret = regcache_sync(regmap);
if (ret) {
dev_err(component->dev, "unable to sync regcache\n");
- return ret;
+ goto err_regulators;
}

return 0;
+
+err_regulators:
+ regulator_bulk_disable(ARRAY_SIZE(es8328->supplies), es8328->supplies);
+err_clk:
+ clk_disable_unprepare(es8328->clk);
+ return ret;
}

static int es8328_component_probe(struct snd_soc_component *component)
diff --git a/sound/soc/codecs/max98390.c b/sound/soc/codecs/max98390.c
index 1bae253618fd0..3a2befb3fab39 100644
--- a/sound/soc/codecs/max98390.c
+++ b/sound/soc/codecs/max98390.c
@@ -1075,6 +1075,9 @@ static int max98390_i2c_probe(struct i2c_client *i2c)

reset_gpio = devm_gpiod_get_optional(&i2c->dev,
"reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(reset_gpio))
+ return dev_err_probe(&i2c->dev, PTR_ERR(reset_gpio),
+ "Failed to get reset gpio\n");

/* Power on device */
if (reset_gpio) {
diff --git a/sound/soc/codecs/nau8821.c b/sound/soc/codecs/nau8821.c
index bfb719ca4c2cf..2d040722ab881 100644
--- a/sound/soc/codecs/nau8821.c
+++ b/sound/soc/codecs/nau8821.c
@@ -1059,20 +1059,24 @@ static void nau8821_eject_jack(struct nau8821 *nau8821)
snd_soc_component_disable_pin(component, "MICBIAS");
snd_soc_dapm_sync(dapm);

+ /* Disable & mask both insertion & ejection IRQs */
+ regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL,
+ NAU8821_IRQ_INSERT_DIS | NAU8821_IRQ_EJECT_DIS,
+ NAU8821_IRQ_INSERT_DIS | NAU8821_IRQ_EJECT_DIS);
+ regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK,
+ NAU8821_IRQ_INSERT_EN | NAU8821_IRQ_EJECT_EN,
+ NAU8821_IRQ_INSERT_EN | NAU8821_IRQ_EJECT_EN);
+
/* Clear all interruption status */
nau8821_irq_status_clear(regmap, 0);

- /* Enable the insertion interruption, disable the ejection inter-
- * ruption, and then bypass de-bounce circuit.
- */
+ /* Enable & unmask the insertion IRQ */
regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL,
- NAU8821_IRQ_EJECT_DIS | NAU8821_IRQ_INSERT_DIS,
- NAU8821_IRQ_EJECT_DIS);
- /* Mask unneeded IRQs: 1 - disable, 0 - enable */
+ NAU8821_IRQ_INSERT_DIS, 0);
regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK,
- NAU8821_IRQ_EJECT_EN | NAU8821_IRQ_INSERT_EN,
- NAU8821_IRQ_EJECT_EN);
+ NAU8821_IRQ_INSERT_EN, 0);

+ /* Bypass de-bounce circuit */
regmap_update_bits(regmap, NAU8821_R0D_JACK_DET_CTRL,
NAU8821_JACK_DET_DB_BYPASS, NAU8821_JACK_DET_DB_BYPASS);

@@ -1096,22 +1100,17 @@ static void nau8821_eject_jack(struct nau8821 *nau8821)
NAU8821_IRQ_KEY_RELEASE_DIS |
NAU8821_IRQ_KEY_PRESS_DIS);
}
-
}

static void nau8821_jdet_work(struct work_struct *work)
{
struct nau8821 *nau8821 =
- container_of(work, struct nau8821, jdet_work);
+ container_of(work, struct nau8821, jdet_work.work);
struct snd_soc_dapm_context *dapm = nau8821->dapm;
struct snd_soc_component *component = snd_soc_dapm_to_component(dapm);
struct regmap *regmap = nau8821->regmap;
int jack_status_reg, mic_detected, event = 0, event_mask = 0;

- snd_soc_component_force_enable_pin(component, "MICBIAS");
- snd_soc_dapm_sync(dapm);
- msleep(20);
-
regmap_read(regmap, NAU8821_R58_I2C_DEVICE_ID, &jack_status_reg);
mic_detected = !(jack_status_reg & NAU8821_KEYDET);
if (mic_detected) {
@@ -1144,6 +1143,7 @@ static void nau8821_jdet_work(struct work_struct *work)
snd_soc_component_disable_pin(component, "MICBIAS");
snd_soc_dapm_sync(dapm);
}
+
event_mask |= SND_JACK_HEADSET;
snd_soc_jack_report(nau8821->jack, event, event_mask);
}
@@ -1153,6 +1153,15 @@ static void nau8821_setup_inserted_irq(struct nau8821 *nau8821)
{
struct regmap *regmap = nau8821->regmap;

+ /* Disable & mask insertion IRQ */
+ regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL,
+ NAU8821_IRQ_INSERT_DIS, NAU8821_IRQ_INSERT_DIS);
+ regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK,
+ NAU8821_IRQ_INSERT_EN, NAU8821_IRQ_INSERT_EN);
+
+ /* Clear insert IRQ status */
+ nau8821_irq_status_clear(regmap, NAU8821_JACK_INSERT_DETECTED);
+
/* Enable internal VCO needed for interruptions */
if (nau8821->dapm->bias_level < SND_SOC_BIAS_PREPARE)
nau8821_configure_sysclk(nau8821, NAU8821_CLK_INTERNAL, 0);
@@ -1172,17 +1181,19 @@ static void nau8821_setup_inserted_irq(struct nau8821 *nau8821)
regmap_update_bits(regmap, NAU8821_R0D_JACK_DET_CTRL,
NAU8821_JACK_DET_DB_BYPASS, 0);

+ /* Unmask & enable the ejection IRQs */
regmap_update_bits(regmap, NAU8821_R0F_INTERRUPT_MASK,
- NAU8821_IRQ_EJECT_EN, 0);
+ NAU8821_IRQ_EJECT_EN, 0);
regmap_update_bits(regmap, NAU8821_R12_INTERRUPT_DIS_CTRL,
- NAU8821_IRQ_EJECT_DIS, 0);
+ NAU8821_IRQ_EJECT_DIS, 0);
}

static irqreturn_t nau8821_interrupt(int irq, void *data)
{
struct nau8821 *nau8821 = (struct nau8821 *)data;
struct regmap *regmap = nau8821->regmap;
- int active_irq, clear_irq = 0, event = 0, event_mask = 0;
+ struct snd_soc_component *component;
+ int active_irq, event = 0, event_mask = 0;

if (regmap_read(regmap, NAU8821_R10_IRQ_STATUS, &active_irq)) {
dev_err(nau8821->dev, "failed to read irq status\n");
@@ -1193,49 +1204,41 @@ static irqreturn_t nau8821_interrupt(int irq, void *data)

if ((active_irq & NAU8821_JACK_EJECT_IRQ_MASK) ==
NAU8821_JACK_EJECT_DETECTED) {
- cancel_work_sync(&nau8821->jdet_work);
+ cancel_delayed_work_sync(&nau8821->jdet_work);
regmap_update_bits(regmap, NAU8821_R71_ANALOG_ADC_1,
NAU8821_MICDET_MASK, NAU8821_MICDET_DIS);
nau8821_eject_jack(nau8821);
event_mask |= SND_JACK_HEADSET;
- clear_irq = NAU8821_JACK_EJECT_IRQ_MASK;
} else if (active_irq & NAU8821_KEY_SHORT_PRESS_IRQ) {
event |= NAU8821_BUTTON;
event_mask |= NAU8821_BUTTON;
- clear_irq = NAU8821_KEY_SHORT_PRESS_IRQ;
+ nau8821_irq_status_clear(regmap, NAU8821_KEY_SHORT_PRESS_IRQ);
} else if (active_irq & NAU8821_KEY_RELEASE_IRQ) {
event_mask = NAU8821_BUTTON;
- clear_irq = NAU8821_KEY_RELEASE_IRQ;
+ nau8821_irq_status_clear(regmap, NAU8821_KEY_RELEASE_IRQ);
} else if ((active_irq & NAU8821_JACK_INSERT_IRQ_MASK) ==
NAU8821_JACK_INSERT_DETECTED) {
- cancel_work_sync(&nau8821->jdet_work);
+ cancel_delayed_work_sync(&nau8821->jdet_work);
regmap_update_bits(regmap, NAU8821_R71_ANALOG_ADC_1,
NAU8821_MICDET_MASK, NAU8821_MICDET_EN);
if (nau8821_is_jack_inserted(regmap)) {
- /* detect microphone and jack type */
- schedule_work(&nau8821->jdet_work);
+ /* Detect microphone and jack type */
+ component = snd_soc_dapm_to_component(nau8821->dapm);
+ snd_soc_component_force_enable_pin(component, "MICBIAS");
+ snd_soc_dapm_sync(nau8821->dapm);
+ schedule_delayed_work(&nau8821->jdet_work, msecs_to_jiffies(20));
/* Turn off insertion interruption at manual mode */
- regmap_update_bits(regmap,
- NAU8821_R12_INTERRUPT_DIS_CTRL,
- NAU8821_IRQ_INSERT_DIS,
- NAU8821_IRQ_INSERT_DIS);
- regmap_update_bits(regmap,
- NAU8821_R0F_INTERRUPT_MASK,
- NAU8821_IRQ_INSERT_EN,
- NAU8821_IRQ_INSERT_EN);
nau8821_setup_inserted_irq(nau8821);
} else {
dev_warn(nau8821->dev,
"Inserted IRQ fired but not connected\n");
nau8821_eject_jack(nau8821);
}
+ } else {
+ /* Clear the rightmost interrupt */
+ nau8821_irq_status_clear(regmap, active_irq);
}

- if (!clear_irq)
- clear_irq = active_irq;
- /* clears the rightmost interruption */
- regmap_write(regmap, NAU8821_R11_INT_CLR_KEY_STATUS, clear_irq);
-
if (event_mask)
snd_soc_jack_report(nau8821->jack, event, event_mask);

@@ -1659,8 +1662,14 @@ int nau8821_enable_jack_detect(struct snd_soc_component *component,
int ret;

nau8821->jack = jack;
+
+ if (nau8821->jdet_active)
+ return 0;
+
/* Initiate jack detection work queue */
- INIT_WORK(&nau8821->jdet_work, nau8821_jdet_work);
+ INIT_DELAYED_WORK(&nau8821->jdet_work, nau8821_jdet_work);
+ nau8821->jdet_active = true;
+
ret = devm_request_threaded_irq(nau8821->dev, nau8821->irq, NULL,
nau8821_interrupt, IRQF_TRIGGER_LOW | IRQF_ONESHOT,
"nau8821", nau8821);
diff --git a/sound/soc/codecs/nau8821.h b/sound/soc/codecs/nau8821.h
index f0935ffafcbec..f9d7cd8cbd211 100644
--- a/sound/soc/codecs/nau8821.h
+++ b/sound/soc/codecs/nau8821.h
@@ -561,7 +561,8 @@ struct nau8821 {
struct regmap *regmap;
struct snd_soc_dapm_context *dapm;
struct snd_soc_jack *jack;
- struct work_struct jdet_work;
+ struct delayed_work jdet_work;
+ bool jdet_active;
int irq;
int clk_id;
int micbias_voltage;
diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c
index 08d164ce3e49c..95143472c9e4f 100644
--- a/sound/soc/codecs/wm8962.c
+++ b/sound/soc/codecs/wm8962.c
@@ -67,6 +67,8 @@ struct wm8962_priv {
struct mutex dsp2_ena_lock;
u16 dsp2_ena;

+ int mic_status;
+
struct delayed_work mic_work;
struct snd_soc_jack *jack;

@@ -1759,7 +1761,7 @@ SND_SOC_BYTES("EQR Coefficients", WM8962_EQ24, 18),


SOC_SINGLE("3D Switch", WM8962_THREED1, 0, 1, 0),
-SND_SOC_BYTES_MASK("3D Coefficients", WM8962_THREED1, 4, WM8962_THREED_ENA),
+SND_SOC_BYTES_MASK("3D Coefficients", WM8962_THREED1, 4, WM8962_THREED_ENA | WM8962_ADC_MONOMIX),

SOC_SINGLE("DF1 Switch", WM8962_DF1, 0, 1, 0),
SND_SOC_BYTES_MASK("DF1 Coefficients", WM8962_DF1, 7, WM8962_DF1_ENA),
@@ -3073,8 +3075,16 @@ static void wm8962_mic_work(struct work_struct *work)
if (reg & WM8962_MICSHORT_STS) {
status |= SND_JACK_BTN_0;
irq_pol |= WM8962_MICSCD_IRQ_POL;
+
+ /* Don't report a microphone if it's shorted right after
+ * plugging in, as this may be a TRS plug in a TRRS socket.
+ */
+ if (!(wm8962->mic_status & WM8962_MICDET_STS))
+ status = 0;
}

+ wm8962->mic_status = status;
+
snd_soc_jack_report(wm8962->jack, status,
SND_JACK_MICROPHONE | SND_JACK_BTN_0);

diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
index d6f00920391d1..656a4d619cdf1 100644
--- a/sound/soc/fsl/fsl_xcvr.c
+++ b/sound/soc/fsl/fsl_xcvr.c
@@ -216,13 +216,10 @@ static int fsl_xcvr_mode_put(struct snd_kcontrol *kcontrol,

xcvr->mode = snd_soc_enum_item_to_val(e, item[0]);

- down_read(&card->snd_card->controls_rwsem);
fsl_xcvr_activate_ctl(dai, fsl_xcvr_arc_mode_kctl.name,
(xcvr->mode == FSL_XCVR_MODE_ARC));
fsl_xcvr_activate_ctl(dai, fsl_xcvr_earc_capds_kctl.name,
(xcvr->mode == FSL_XCVR_MODE_EARC));
- up_read(&card->snd_card->controls_rwsem);
-
/* Allow playback for SPDIF only */
rtd = snd_soc_get_pcm_runtime(card, card->dai_link);
rtd->pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream_count =
diff --git a/sound/soc/fsl/imx-rpmsg.c b/sound/soc/fsl/imx-rpmsg.c
index ce98d2288193b..b4d6ad563445f 100644
--- a/sound/soc/fsl/imx-rpmsg.c
+++ b/sound/soc/fsl/imx-rpmsg.c
@@ -145,7 +145,7 @@ static int imx_rpmsg_probe(struct platform_device *pdev)
data->dai.ignore_pmdown_time = 1;

data->dai.cpus->dai_name = pdev->dev.platform_data;
- cpu_dai = snd_soc_find_dai(data->dai.cpus);
+ cpu_dai = snd_soc_find_dai_with_mutex(data->dai.cpus);
if (!cpu_dai) {
ret = -EPROBE_DEFER;
goto fail;
diff --git a/sound/soc/intel/common/soc-acpi-intel-arl-match.c b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
index 1ad704ca2c5f2..36d73623fb161 100644
--- a/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+++ b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
@@ -45,23 +45,22 @@ static const struct snd_soc_acpi_endpoint spk_3_endpoint = {
.group_id = 1,
};

-/*
- * RT722 is a multi-function codec, three endpoints are created for
- * its headset, amp and dmic functions.
- */
-static const struct snd_soc_acpi_endpoint rt722_endpoints[] = {
+static const struct snd_soc_acpi_endpoint jack_amp_g1_dmic_endpoints[] = {
+ /* Jack Endpoint */
{
.num = 0,
.aggregated = 0,
.group_position = 0,
.group_id = 0,
},
+ /* Amp Endpoint, work as spk_l_endpoint */
{
.num = 1,
- .aggregated = 0,
+ .aggregated = 1,
.group_position = 0,
- .group_id = 0,
+ .group_id = 1,
},
+ /* DMIC Endpoint */
{
.num = 2,
.aggregated = 0,
@@ -229,11 +228,11 @@ static const struct snd_soc_acpi_adr_device rt711_sdca_0_adr[] = {
}
};

-static const struct snd_soc_acpi_adr_device rt722_0_single_adr[] = {
+static const struct snd_soc_acpi_adr_device rt722_0_agg_adr[] = {
{
.adr = 0x000030025D072201ull,
- .num_endpoints = ARRAY_SIZE(rt722_endpoints),
- .endpoints = rt722_endpoints,
+ .num_endpoints = ARRAY_SIZE(jack_amp_g1_dmic_endpoints),
+ .endpoints = jack_amp_g1_dmic_endpoints,
.name_prefix = "rt722"
}
};
@@ -371,8 +370,8 @@ static const struct snd_soc_acpi_link_adr arl_sdca_rvp[] = {
static const struct snd_soc_acpi_link_adr arl_rt722_l0_rt1320_l2[] = {
{
.mask = BIT(0),
- .num_adr = ARRAY_SIZE(rt722_0_single_adr),
- .adr_d = rt722_0_single_adr,
+ .num_adr = ARRAY_SIZE(rt722_0_agg_adr),
+ .adr_d = rt722_0_agg_adr,
},
{
.mask = BIT(2),
diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
index 7feefeb6b876d..46c338ddefe9f 100644
--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
+++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
@@ -22,6 +22,7 @@

#define DRV_NAME "rockchip-i2s-tdm"

+#define DEFAULT_MCLK_FS 256
#define CH_GRP_MAX 4 /* The max channel 8 / 2 */
#define MULTIPLEX_CH_MAX 10

@@ -693,6 +694,15 @@ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
mclk_rate = i2s_tdm->mclk_rx_freq;
}

+ /*
+ * When the dai/component driver doesn't need to set mclk-fs for a specific
+ * clock, it can skip the call to set_sysclk() for that clock.
+ * In that case, simply use the clock rate from the params and multiply it by
+ * the default mclk-fs value.
+ */
+ if (!mclk_rate)
+ mclk_rate = DEFAULT_MCLK_FS * params_rate(params);
+
err = clk_set_rate(mclk, mclk_rate);
if (err)
return err;
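The rockchip_i2s_tdm hunk above falls back to a default mclk-fs ratio when the machine driver never called set_sysclk() for that clock. A minimal userspace sketch of that fallback logic (function and parameter names are illustrative, not from the driver):

```c
#include <assert.h>

/* Mirrors the patch's fallback: a configured rate from set_sysclk() wins;
 * 0 means "not set", in which case the MCLK is derived from the sample
 * rate and the default mclk-fs ratio of 256 introduced by the patch. */
#define DEFAULT_MCLK_FS 256

static unsigned long pick_mclk_rate(unsigned long configured_mclk,
				    unsigned int sample_rate)
{
	if (configured_mclk)
		return configured_mclk;
	return (unsigned long)DEFAULT_MCLK_FS * sample_rate;
}
```

For 48 kHz audio with no configured MCLK this yields 256 * 48000 = 12288000 Hz.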
diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
index 2e58a264da556..169bfd23f1b93 100644
--- a/sound/soc/sof/intel/hda-dai.c
+++ b/sound/soc/sof/intel/hda-dai.c
@@ -70,12 +70,22 @@ static const struct hda_dai_widget_dma_ops *
hda_dai_get_ops(struct snd_pcm_substream *substream, struct snd_soc_dai *cpu_dai)
{
struct snd_soc_dapm_widget *w = snd_soc_dai_get_widget(cpu_dai, substream->stream);
- struct snd_sof_widget *swidget = w->dobj.private;
+ struct snd_sof_widget *swidget;
struct snd_sof_dev *sdev;
struct snd_sof_dai *sdai;

- sdev = widget_to_sdev(w);
+ /*
+ * this is unlikely if the topology and the machine driver DAI links match.
+ * But if there's a missing DAI link in topology, this will prevent a NULL pointer
+ * dereference later on.
+ */
+ if (!w) {
+ dev_err(cpu_dai->dev, "%s: widget is NULL\n", __func__);
+ return NULL;
+ }

+ sdev = widget_to_sdev(w);
+ swidget = w->dobj.private;
if (!swidget) {
dev_err(sdev->dev, "%s: swidget is NULL\n", __func__);
return NULL;
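The hda-dai fix above is the classic "dereference before NULL check" bug: `w->dobj.private` was read before `w` was validated. A standalone sketch of the corrected ordering, with hypothetical stand-in types:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-in for the SOF widget (hypothetical type). */
struct widget { void *private_data; };

/* Fixed ordering: validate the pointer first, then dereference. Before
 * the fix, a missing DAI link in the topology left the widget NULL and
 * the early dereference crashed. */
static void *get_swidget(struct widget *w)
{
	if (!w) {
		fprintf(stderr, "widget is NULL\n");
		return NULL;
	}
	return w->private_data;
}
```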
diff --git a/sound/soc/sof/ipc4-control.c b/sound/soc/sof/ipc4-control.c
index 976a4794d6100..453ed1643b89c 100644
--- a/sound/soc/sof/ipc4-control.c
+++ b/sound/soc/sof/ipc4-control.c
@@ -66,7 +66,7 @@ static int sof_ipc4_set_get_kcontrol_data(struct snd_sof_control *scontrol,
* configuration
*/
memcpy(scontrol->ipc_control_data, scontrol->old_ipc_control_data,
- scontrol->max_size);
+ scontrol->size);
kfree(scontrol->old_ipc_control_data);
scontrol->old_ipc_control_data = NULL;
/* Send the last known good configuration to firmware */
@@ -412,19 +412,35 @@ static int sof_ipc4_set_get_bytes_data(struct snd_sof_dev *sdev,
int ret = 0;

/* Send the new data to the firmware only if it is powered up */
- if (set && !pm_runtime_active(sdev->dev))
- return 0;
+ if (set) {
+ if (!pm_runtime_active(sdev->dev))
+ return 0;
+
+ if (!data->size) {
+ dev_dbg(sdev->dev, "%s: No data to be sent.\n",
+ scontrol->name);
+ return 0;
+ }
+ }

msg->extension = SOF_IPC4_MOD_EXT_MSG_PARAM_ID(data->type);

msg->data_ptr = data->data;
- msg->data_size = data->size;
+ if (set)
+ msg->data_size = data->size;
+ else
+ msg->data_size = scontrol->max_size - sizeof(*data);

ret = sof_ipc4_set_get_kcontrol_data(scontrol, set, lock);
- if (ret < 0)
+ if (ret < 0) {
dev_err(sdev->dev, "Failed to %s for %s\n",
set ? "set bytes update" : "get bytes",
scontrol->name);
+ } else if (!set) {
+ /* Update the sizes according to the received payload data */
+ data->size = msg->data_size;
+ scontrol->size = sizeof(*cdata) + sizeof(*data) + data->size;
+ }

msg->data_ptr = NULL;
msg->data_size = 0;
@@ -440,6 +456,7 @@ static int sof_ipc4_bytes_put(struct snd_sof_control *scontrol,
struct snd_sof_dev *sdev = snd_soc_component_get_drvdata(scomp);
struct sof_abi_hdr *data = cdata->data;
size_t size;
+ int ret;

if (scontrol->max_size > sizeof(ucontrol->value.bytes.data)) {
dev_err_ratelimited(scomp->dev,
@@ -461,9 +478,12 @@ static int sof_ipc4_bytes_put(struct snd_sof_control *scontrol,
/* copy from kcontrol */
memcpy(data, ucontrol->value.bytes.data, size);

- sof_ipc4_set_get_bytes_data(sdev, scontrol, true, true);
+ ret = sof_ipc4_set_get_bytes_data(sdev, scontrol, true, true);
+ if (!ret)
+ /* Update the cdata size */
+ scontrol->size = sizeof(*cdata) + size;

- return 0;
+ return ret;
}

static int sof_ipc4_bytes_get(struct snd_sof_control *scontrol,
@@ -559,7 +579,7 @@ static int sof_ipc4_bytes_ext_put(struct snd_sof_control *scontrol,
if (!scontrol->old_ipc_control_data) {
/* Create a backup of the current, valid bytes control */
scontrol->old_ipc_control_data = kmemdup(scontrol->ipc_control_data,
- scontrol->max_size, GFP_KERNEL);
+ scontrol->size, GFP_KERNEL);
if (!scontrol->old_ipc_control_data)
return -ENOMEM;
}
@@ -567,12 +587,15 @@ static int sof_ipc4_bytes_ext_put(struct snd_sof_control *scontrol,
/* Copy the whole binary data which includes the ABI header and the payload */
if (copy_from_user(data, tlvd->tlv, header.length)) {
memcpy(scontrol->ipc_control_data, scontrol->old_ipc_control_data,
- scontrol->max_size);
+ scontrol->size);
kfree(scontrol->old_ipc_control_data);
scontrol->old_ipc_control_data = NULL;
return -EFAULT;
}

+ /* Update the cdata size */
+ scontrol->size = sizeof(*cdata) + header.length;
+
return sof_ipc4_set_get_bytes_data(sdev, scontrol, true, true);
}
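The ipc4-control changes above consistently replace `max_size` (the topology-declared capacity) with `size` (the bytes currently valid) when backing up and restoring control data. A miniature sketch of that bookkeeping, with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical miniature of the scontrol fields: 'max_size' is the
 * allocation bound, 'size' the bytes currently valid. The fix copies
 * only 'size' bytes so stale bytes past the payload are never saved
 * or restored. */
struct ctl {
	size_t max_size;	/* allocated capacity */
	size_t size;		/* currently valid bytes */
	unsigned char *data;
	unsigned char *backup;
};

static int ctl_backup(struct ctl *c)
{
	c->backup = malloc(c->size);		/* was: malloc(max_size) */
	if (!c->backup)
		return -1;
	memcpy(c->backup, c->data, c->size);	/* was: copy of max_size */
	return 0;
}

static void ctl_restore(struct ctl *c)
{
	memcpy(c->data, c->backup, c->size);
	free(c->backup);
	c->backup = NULL;
}
```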

diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
index 4849a3ce02dca..9208cbe95e3c9 100644
--- a/sound/soc/sof/ipc4-topology.c
+++ b/sound/soc/sof/ipc4-topology.c
@@ -2508,22 +2508,41 @@ static int sof_ipc4_control_load_bytes(struct snd_sof_dev *sdev, struct snd_sof_
struct sof_ipc4_msg *msg;
int ret;

- if (scontrol->max_size < (sizeof(*control_data) + sizeof(struct sof_abi_hdr))) {
- dev_err(sdev->dev, "insufficient size for a bytes control %s: %zu.\n",
+ /*
+ * The max_size is coming from topology and indicates the maximum size
+ * of sof_abi_hdr plus the payload, which excludes the local only
+ * 'struct sof_ipc4_control_data'
+ */
+ if (scontrol->max_size < sizeof(struct sof_abi_hdr)) {
+ dev_err(sdev->dev,
+ "insufficient maximum size for a bytes control %s: %zu.\n",
scontrol->name, scontrol->max_size);
return -EINVAL;
}

- if (scontrol->priv_size > scontrol->max_size - sizeof(*control_data)) {
- dev_err(sdev->dev, "scontrol %s bytes data size %zu exceeds max %zu.\n",
- scontrol->name, scontrol->priv_size,
- scontrol->max_size - sizeof(*control_data));
+ if (scontrol->priv_size > scontrol->max_size) {
+ dev_err(sdev->dev,
+ "bytes control %s initial data size %zu exceeds max %zu.\n",
+ scontrol->name, scontrol->priv_size, scontrol->max_size);
+ return -EINVAL;
+ }
+
+ if (scontrol->priv_size < sizeof(struct sof_abi_hdr)) {
+ dev_err(sdev->dev,
+ "bytes control %s initial data size %zu is insufficient.\n",
+ scontrol->name, scontrol->priv_size);
return -EINVAL;
}

- scontrol->size = sizeof(struct sof_ipc4_control_data) + scontrol->priv_size;
+ /*
+ * The used size behind the cdata pointer, which can be smaller than
+ * the maximum size
+ */
+ scontrol->size = sizeof(*control_data) + scontrol->priv_size;

- scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL);
+ /* Allocate the cdata: local struct size + maximum payload size */
+ scontrol->ipc_control_data = kzalloc(sizeof(*control_data) + scontrol->max_size,
+ GFP_KERNEL);
if (!scontrol->ipc_control_data)
return -ENOMEM;

diff --git a/sound/soc/sof/ipc4.c b/sound/soc/sof/ipc4.c
index 4386cbae16d4e..e4eb8e004885e 100644
--- a/sound/soc/sof/ipc4.c
+++ b/sound/soc/sof/ipc4.c
@@ -15,6 +15,7 @@
#include "sof-audio.h"
#include "ipc4-fw-reg.h"
#include "ipc4-priv.h"
+#include "ipc4-topology.h"
#include "ipc4-telemetry.h"
#include "ops.h"

@@ -408,6 +409,23 @@ static int sof_ipc4_tx_msg(struct snd_sof_dev *sdev, void *msg_data, size_t msg_
return ret;
}

+static bool sof_ipc4_tx_payload_for_get_data(struct sof_ipc4_msg *tx)
+{
+ /*
+ * Messages that require TX payload with LARGE_CONFIG_GET.
+ * The TX payload is placed into the IPC message data section by caller,
+ * which needs to be copied to temporary buffer since the received data
+ * will overwrite it.
+ */
+ switch (tx->extension & SOF_IPC4_MOD_EXT_MSG_PARAM_ID_MASK) {
+ case SOF_IPC4_MOD_EXT_MSG_PARAM_ID(SOF_IPC4_SWITCH_CONTROL_PARAM_ID):
+ case SOF_IPC4_MOD_EXT_MSG_PARAM_ID(SOF_IPC4_ENUM_CONTROL_PARAM_ID):
+ return true;
+ default:
+ return false;
+ }
+}
+
static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,
size_t payload_bytes, bool set)
{
@@ -419,6 +437,8 @@ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,
struct sof_ipc4_msg tx = {{ 0 }};
struct sof_ipc4_msg rx = {{ 0 }};
size_t remaining = payload_bytes;
+ void *tx_payload_for_get = NULL;
+ size_t tx_data_size = 0;
size_t offset = 0;
size_t chunk_size;
int ret;
@@ -444,10 +464,20 @@ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,

tx.extension |= SOF_IPC4_MOD_EXT_MSG_FIRST_BLOCK(1);

+ if (sof_ipc4_tx_payload_for_get_data(&tx)) {
+ tx_data_size = min(ipc4_msg->data_size, payload_limit);
+ tx_payload_for_get = kmemdup(ipc4_msg->data_ptr, tx_data_size,
+ GFP_KERNEL);
+ if (!tx_payload_for_get)
+ return -ENOMEM;
+ }
+
/* ensure the DSP is in D0i0 before sending IPC */
ret = snd_sof_dsp_set_power_state(sdev, &target_state);
- if (ret < 0)
+ if (ret < 0) {
+ kfree(tx_payload_for_get);
return ret;
+ }

/* Serialise IPC TX */
mutex_lock(&sdev->ipc->tx_mutex);
@@ -481,7 +511,15 @@ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,
rx.data_size = chunk_size;
rx.data_ptr = ipc4_msg->data_ptr + offset;

- tx_size = 0;
+ if (tx_payload_for_get) {
+ tx_size = tx_data_size;
+ tx.data_size = tx_size;
+ tx.data_ptr = tx_payload_for_get;
+ } else {
+ tx_size = 0;
+ tx.data_size = 0;
+ tx.data_ptr = NULL;
+ }
rx_size = chunk_size;
}

@@ -528,6 +566,8 @@ static int sof_ipc4_set_get_data(struct snd_sof_dev *sdev, void *data,

mutex_unlock(&sdev->ipc->tx_mutex);

+ kfree(tx_payload_for_get);
+
return ret;
}
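The ipc4.c hunks above fix LARGE_CONFIG_GET for switch/enum controls, whose TX payload sits in the same buffer that the received data will overwrite; the patch duplicates the payload with kmemdup() before the first read. A userspace sketch of the same save-before-overwrite pattern (malloc+memcpy standing in for kmemdup):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* The caller's buffer doubles as request payload and receive buffer, so
 * the request bytes must be duplicated before the first response chunk
 * lands in the buffer and clobbers them. */
static unsigned char *dup_tx_payload(const unsigned char *buf, size_t len)
{
	unsigned char *copy = malloc(len);

	if (copy)
		memcpy(copy, buf, len);
	return copy;
}
```

After the copy, the original buffer can be reused for RX while each chunked request still points at the preserved payload; the copy is freed once the transfer completes, on both success and error paths.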

diff --git a/sound/soc/sunxi/sun50i-dmic.c b/sound/soc/sunxi/sun50i-dmic.c
index 3e751b5694fe3..e8e03a5e35bc7 100644
--- a/sound/soc/sunxi/sun50i-dmic.c
+++ b/sound/soc/sunxi/sun50i-dmic.c
@@ -358,6 +358,9 @@ static int sun50i_dmic_probe(struct platform_device *pdev)

host->regmap = devm_regmap_init_mmio(&pdev->dev, base,
&sun50i_dmic_regmap_config);
+ if (IS_ERR(host->regmap))
+ return dev_err_probe(&pdev->dev, PTR_ERR(host->regmap),
+ "failed to initialise regmap\n");

/* Clocks */
host->bus_clk = devm_clk_get(&pdev->dev, "bus");
diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
index aa201e4744bf6..cb94c2cad2213 100644
--- a/sound/usb/endpoint.c
+++ b/sound/usb/endpoint.c
@@ -278,8 +278,8 @@ static inline bool has_tx_length_quirk(struct snd_usb_audio *chip)
return chip->quirk_flags & QUIRK_FLAG_TX_LENGTH;
}

-static void prepare_silent_urb(struct snd_usb_endpoint *ep,
- struct snd_urb_ctx *ctx)
+static int prepare_silent_urb(struct snd_usb_endpoint *ep,
+ struct snd_urb_ctx *ctx)
{
struct urb *urb = ctx->urb;
unsigned int offs = 0;
@@ -292,28 +292,34 @@ static void prepare_silent_urb(struct snd_usb_endpoint *ep,
extra = sizeof(packet_length);

for (i = 0; i < ctx->packets; ++i) {
- unsigned int offset;
- unsigned int length;
- int counts;
-
- counts = snd_usb_endpoint_next_packet_size(ep, ctx, i, 0);
- length = counts * ep->stride; /* number of silent bytes */
- offset = offs * ep->stride + extra * i;
- urb->iso_frame_desc[i].offset = offset;
+ int length;
+
+ length = snd_usb_endpoint_next_packet_size(ep, ctx, i, 0);
+ if (length < 0)
+ return length;
+ length *= ep->stride; /* number of silent bytes */
+ if (offs + length + extra > ctx->buffer_size)
+ break;
+ urb->iso_frame_desc[i].offset = offs;
urb->iso_frame_desc[i].length = length + extra;
if (extra) {
packet_length = cpu_to_le32(length);
- memcpy(urb->transfer_buffer + offset,
+ memcpy(urb->transfer_buffer + offs,
&packet_length, sizeof(packet_length));
+ offs += extra;
}
- memset(urb->transfer_buffer + offset + extra,
+ memset(urb->transfer_buffer + offs,
ep->silence_value, length);
- offs += counts;
+ offs += length;
}

- urb->number_of_packets = ctx->packets;
- urb->transfer_buffer_length = offs * ep->stride + ctx->packets * extra;
+ if (!offs)
+ return -EPIPE;
+
+ urb->number_of_packets = i;
+ urb->transfer_buffer_length = offs;
ctx->queued = 0;
+ return 0;
}

/*
@@ -335,8 +341,7 @@ static int prepare_outbound_urb(struct snd_usb_endpoint *ep,
if (data_subs && ep->prepare_data_urb)
return ep->prepare_data_urb(data_subs, urb, in_stream_lock);
/* no data provider, so send silence */
- prepare_silent_urb(ep, ctx);
- break;
+ return prepare_silent_urb(ep, ctx);

case SND_USB_ENDPOINT_TYPE_SYNC:
if (snd_usb_get_speed(ep->chip->dev) >= USB_SPEED_HIGH) {
@@ -489,6 +494,7 @@ int snd_usb_queue_pending_output_urbs(struct snd_usb_endpoint *ep,

/* copy over the length information */
if (implicit_fb) {
+ ctx->packets = packet->packets;
for (i = 0; i < packet->packets; i++)
ctx->packet_size[i] = packet->packet_size[i];
}
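The rewritten prepare_silent_urb() above keeps a single running byte offset (including the optional per-packet length prefix) and stops packing once the next packet would overrun the transfer buffer. A simplified sketch of that accounting, with illustrative names and a fixed packet size:

```c
#include <assert.h>

/* Each packet may carry an 'extra' length prefix before the silent data;
 * packing stops when the next packet would not fit in 'buffer_size'.
 * Returns the number of packets that fit; '*used' gets the total bytes. */
struct pkt { unsigned int offset, length; };

static unsigned int pack_silent(struct pkt *pkts, unsigned int npkts,
				unsigned int bytes_per_pkt,
				unsigned int extra,
				unsigned int buffer_size,
				unsigned int *used)
{
	unsigned int offs = 0, i;

	for (i = 0; i < npkts; i++) {
		if (offs + bytes_per_pkt + extra > buffer_size)
			break;
		pkts[i].offset = offs;
		pkts[i].length = bytes_per_pkt + extra;
		offs += bytes_per_pkt + extra;
	}
	*used = offs;
	return i;
}
```

The key change mirrored here is that `number_of_packets` and `transfer_buffer_length` are derived from what actually fit, rather than assumed from `ctx->packets`.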
diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
index 8fa840df46210..947467112409a 100644
--- a/sound/usb/quirks.c
+++ b/sound/usb/quirks.c
@@ -2147,6 +2147,8 @@ struct usb_audio_quirk_flags_table {

static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
/* Device matches */
+ DEVICE_FLG(0x001f, 0x0b21, /* AB13X USB Audio */
+ QUIRK_FLAG_FORCE_IFACE_RESET | QUIRK_FLAG_IFACE_DELAY),
DEVICE_FLG(0x03f0, 0x654a, /* HP 320 FHD Webcam */
QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
DEVICE_FLG(0x041e, 0x3000, /* Creative SB Extigy */
diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
index 39f208928cdb5..587403a19af3a 100644
--- a/tools/bpf/bpftool/net.c
+++ b/tools/bpf/bpftool/net.c
@@ -156,7 +156,7 @@ static int netlink_recv(int sock, __u32 nl_pid, __u32 seq,
bool multipart = true;
struct nlmsgerr *err;
struct nlmsghdr *nh;
- char buf[4096];
+ char buf[8192];
int len, ret;

while (multipart) {
@@ -201,6 +201,9 @@ static int netlink_recv(int sock, __u32 nl_pid, __u32 seq,
return ret;
}
}
+
+ if (len)
+ p_err("Invalid message or trailing data in Netlink response: %d bytes left", len);
}
ret = 0;
done:
diff --git a/tools/include/linux/bitfield.h b/tools/include/linux/bitfield.h
index 6093fa6db2600..ddf81f24956ba 100644
--- a/tools/include/linux/bitfield.h
+++ b/tools/include/linux/bitfield.h
@@ -8,6 +8,7 @@
#define _LINUX_BITFIELD_H

#include <linux/build_bug.h>
+#include <linux/kernel.h>
#include <asm/byteorder.h>

/*
diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
index 12306b5de3efb..a833d91886f87 100644
--- a/tools/lib/bpf/btf_dump.c
+++ b/tools/lib/bpf/btf_dump.c
@@ -1758,9 +1758,18 @@ static int btf_dump_get_bitfield_value(struct btf_dump *d,
__u16 left_shift_bits, right_shift_bits;
const __u8 *bytes = data;
__u8 nr_copy_bits;
+ __u8 start_bit, nr_bytes;
__u64 num = 0;
int i;

+ /* Calculate how many bytes cover the bitfield */
+ start_bit = bits_offset % 8;
+ nr_bytes = (start_bit + bit_sz + 7) / 8;
+
+ /* Bound check */
+ if (data + nr_bytes > d->typed_dump->data_end)
+ return -E2BIG;
+
/* Maximum supported bitfield size is 64 bits */
if (t->size > 8) {
pr_warn("unexpected bitfield size %d\n", t->size);
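The bound check added to btf_dump_get_bitfield_value() computes how many bytes a bitfield spans before reading them. That calculation in isolation:

```c
#include <assert.h>

/* A bitfield starting at bit 'bits_offset' with width 'bit_sz' touches
 * this many bytes from the data pointer: the offset within the first
 * byte plus the width, rounded up to whole bytes. */
static unsigned int bitfield_nr_bytes(unsigned int bits_offset,
				      unsigned int bit_sz)
{
	unsigned int start_bit = bits_offset % 8;

	return (start_bit + bit_sz + 7) / 8;
}
```

For example, a 2-bit field starting at bit 7 straddles a byte boundary and needs 2 bytes, which is exactly the case the `data_end` comparison guards against.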
diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
index 68a2def171751..6f16c4f7b3a43 100644
--- a/tools/lib/bpf/netlink.c
+++ b/tools/lib/bpf/netlink.c
@@ -143,7 +143,7 @@ static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
struct nlmsghdr *nh;
int len, ret;

- ret = alloc_iov(&iov, 4096);
+ ret = alloc_iov(&iov, 8192);
if (ret)
goto done;

@@ -212,6 +212,8 @@ static int libbpf_netlink_recv(int sock, __u32 nl_pid, int seq,
}
}
}
+ if (len)
+ pr_warn("Invalid message or trailing data in Netlink response: %d bytes left\n", len);
}
ret = 0;
done:
diff --git a/tools/lib/perf/Makefile b/tools/lib/perf/Makefile
index 3a9b2140aa048..703a8ff0b3430 100644
--- a/tools/lib/perf/Makefile
+++ b/tools/lib/perf/Makefile
@@ -54,13 +54,6 @@ endif

TEST_ARGS := $(if $(V),-v)

-# Set compile option CFLAGS
-ifdef EXTRA_CFLAGS
- CFLAGS := $(EXTRA_CFLAGS)
-else
- CFLAGS := -g -Wall
-endif
-
INCLUDES = \
-I$(srctree)/tools/lib/perf/include \
-I$(srctree)/tools/lib/ \
@@ -70,11 +63,12 @@ INCLUDES = \
-I$(srctree)/tools/include/uapi

# Append required CFLAGS
-override CFLAGS += $(EXTRA_WARNINGS)
-override CFLAGS += -Werror -Wall
+override CFLAGS := $(INCLUDES) $(CFLAGS)
+override CFLAGS += -g -Werror -Wall
override CFLAGS += -fPIC
-override CFLAGS += $(INCLUDES)
override CFLAGS += -fvisibility=hidden
+override CFLAGS += $(EXTRA_WARNINGS)
+override CFLAGS += $(EXTRA_CFLAGS)

all:

diff --git a/tools/lib/subcmd/help.c b/tools/lib/subcmd/help.c
index ddaeb4eb3e249..db94aa685b73b 100644
--- a/tools/lib/subcmd/help.c
+++ b/tools/lib/subcmd/help.c
@@ -97,11 +97,13 @@ void exclude_cmds(struct cmdnames *cmds, struct cmdnames *excludes)
ei++;
}
}
- if (ci != cj) {
- while (ci < cmds->cnt) {
- cmds->names[cj++] = cmds->names[ci];
- cmds->names[ci++] = NULL;
+ while (ci < cmds->cnt) {
+ if (ci != cj) {
+ cmds->names[cj] = cmds->names[ci];
+ cmds->names[ci] = NULL;
}
+ ci++;
+ cj++;
}
for (ci = cj; ci < cmds->cnt; ci++)
assert(cmds->names[ci] == NULL);
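The exclude_cmds() fix above replaces a shift that only worked for a trailing gap with a read-index/write-index walk that compacts interior gaps too. The general pattern, sketched over a plain pointer array (a simplification of the cmdnames case):

```c
#include <assert.h>
#include <stddef.h>

/* Stable in-place compaction: 'ci' reads every slot, 'cj' writes the
 * next surviving entry; vacated slots are NULLed so everything past the
 * last survivor ends up NULL, matching the assert in the fixed code. */
static size_t compact(const char **names, size_t cnt)
{
	size_t ci, cj = 0;

	for (ci = 0; ci < cnt; ci++) {
		if (!names[ci])
			continue;
		if (ci != cj) {
			names[cj] = names[ci];
			names[ci] = NULL;
		}
		cj++;
	}
	return cj;	/* new count */
}
```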
diff --git a/tools/net/sunrpc/xdrgen/generators/__init__.py b/tools/net/sunrpc/xdrgen/generators/__init__.py
index fd24574612742..49191cd10ab70 100644
--- a/tools/net/sunrpc/xdrgen/generators/__init__.py
+++ b/tools/net/sunrpc/xdrgen/generators/__init__.py
@@ -6,7 +6,7 @@ import sys
from jinja2 import Environment, FileSystemLoader, Template

from xdr_ast import _XdrAst, Specification, _RpcProgram, _XdrTypeSpecifier
-from xdr_ast import public_apis, pass_by_reference, get_header_name
+from xdr_ast import public_apis, pass_by_reference, structs, get_header_name
from xdr_parse import get_xdr_annotate


@@ -22,6 +22,7 @@ def create_jinja2_environment(language: str, xdr_type: str) -> Environment:
environment.globals["annotate"] = get_xdr_annotate()
environment.globals["public_apis"] = public_apis
environment.globals["pass_by_reference"] = pass_by_reference
+ environment.globals["structs"] = structs
return environment
case _:
raise NotImplementedError("Language not supported")
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2
index 0b1709cca0d4a..19b219dd276d3 100644
--- a/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/decoder/argument.j2
@@ -14,7 +14,11 @@ bool {{ program }}_svc_decode_{{ argument }}(struct svc_rqst *rqstp, struct xdr_
{% if argument == 'void' %}
return xdrgen_decode_void(xdr);
{% else %}
+{% if argument in structs %}
struct {{ argument }} *argp = rqstp->rq_argp;
+{% else %}
+ {{ argument }} *argp = rqstp->rq_argp;
+{% endif %}

return xdrgen_decode_{{ argument }}(xdr, argp);
{% endif %}
diff --git a/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2 b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2
index 6fc61a5d47b7f..746592cfda562 100644
--- a/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2
+++ b/tools/net/sunrpc/xdrgen/templates/C/program/encoder/result.j2
@@ -14,8 +14,14 @@ bool {{ program }}_svc_encode_{{ result }}(struct svc_rqst *rqstp, struct xdr_st
{% if result == 'void' %}
return xdrgen_encode_void(xdr);
{% else %}
+{% if result in structs %}
struct {{ result }} *resp = rqstp->rq_resp;

return xdrgen_encode_{{ result }}(xdr, resp);
+{% else %}
+ {{ result }} *resp = rqstp->rq_resp;
+
+ return xdrgen_encode_{{ result }}(xdr, *resp);
+{% endif %}
{% endif %}
}
diff --git a/tools/objtool/Makefile b/tools/objtool/Makefile
index bf7f7f84ac625..02d1fccd495f4 100644
--- a/tools/objtool/Makefile
+++ b/tools/objtool/Makefile
@@ -7,6 +7,8 @@ srctree := $(patsubst %/,%,$(dir $(CURDIR)))
srctree := $(patsubst %/,%,$(dir $(srctree)))
endif

+RM ?= rm -f
+
LIBSUBCMD_DIR = $(srctree)/tools/lib/subcmd/
ifneq ($(OUTPUT),)
LIBSUBCMD_OUTPUT = $(abspath $(OUTPUT))/libsubcmd
diff --git a/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json b/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
index af2fdf1f55d64..917f9d68f85a6 100644
--- a/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
+++ b/tools/perf/pmu-events/arch/x86/amdzen5/load-store.json
@@ -70,19 +70,19 @@
"EventName": "ls_mab_alloc.load_store_allocations",
"EventCode": "0x41",
"BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for load-store allocations.",
- "UMask": "0x3f"
+ "UMask": "0x07"
},
{
"EventName": "ls_mab_alloc.hardware_prefetcher_allocations",
"EventCode": "0x41",
"BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for hardware prefetcher allocations.",
- "UMask": "0x40"
+ "UMask": "0x08"
},
{
"EventName": "ls_mab_alloc.all_allocations",
"EventCode": "0x41",
"BriefDescription": "Miss Address Buffer (MAB) entries allocated by a Load-Store (LS) pipe for all types of allocations.",
- "UMask": "0x7f"
+ "UMask": "0x0f"
},
{
"EventName": "ls_dmnd_fills_from_sys.local_l2",
diff --git a/tools/perf/tests/shell/stat.sh b/tools/perf/tests/shell/stat.sh
index 62f13dfeae8e4..a76ccf2deb563 100755
--- a/tools/perf/tests/shell/stat.sh
+++ b/tools/perf/tests/shell/stat.sh
@@ -18,7 +18,7 @@ test_default_stat() {

test_stat_record_report() {
echo "stat record and report test"
- if ! perf stat record -o - true | perf stat report -i - 2>&1 | \
+ if ! perf stat record -e task-clock -o - true | perf stat report -i - 2>&1 | \
grep -E -q "Performance counter stats for 'pipe':"
then
echo "stat record and report test [Failed]"
@@ -30,7 +30,7 @@ test_stat_record_report() {

test_stat_record_script() {
echo "stat record and script test"
- if ! perf stat record -o - true | perf script -i - 2>&1 | \
+ if ! perf stat record -e task-clock -o - true | perf script -i - 2>&1 | \
grep -E -q "CPU[[:space:]]+THREAD[[:space:]]+VAL[[:space:]]+ENA[[:space:]]+RUN[[:space:]]+TIME[[:space:]]+EVENT"
then
echo "stat record and script test [Failed]"
@@ -159,7 +159,7 @@ test_hybrid() {
fi

# Run default Perf stat
- cycles_events=$(perf stat -- true 2>&1 | grep -E "/cycles/[uH]*| cycles[:uH]* " -c)
+ cycles_events=$(perf stat -a -- sleep 0.1 2>&1 | grep -E "/cpu-cycles/[uH]*| cpu-cycles[:uH]* " | wc -l)

# The expectation is that default output will have a cycles events on each
# hybrid PMU. In situations with no cycles PMU events, like virtualized, this
diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
index a228a7ba30caa..2dc93199ac258 100644
--- a/tools/perf/util/disasm.c
+++ b/tools/perf/util/disasm.c
@@ -79,7 +79,7 @@ static int arch__grow_instructions(struct arch *arch)
if (new_instructions == NULL)
return -1;

- memcpy(new_instructions, arch->instructions, arch->nr_instructions);
+ memcpy(new_instructions, arch->instructions, arch->nr_instructions * sizeof(struct ins));
goto out_update_instructions;
}
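The disasm.c one-liner above fixes a common memcpy() pitfall: the third argument is a byte count, so copying an array of structs must scale the element count by the element size. A self-contained illustration (struct layout is a stand-in, not perf's real `struct ins`):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct ins { int id; const char *name; };

/* Grow-and-copy helper: before the fix the copy was truncated to 'nr'
 * bytes instead of 'nr' elements, silently dropping most of the table. */
static struct ins *grow_copy(const struct ins *src, size_t nr, size_t new_nr)
{
	struct ins *dst = calloc(new_nr, sizeof(*dst));

	if (!dst)
		return NULL;
	memcpy(dst, src, nr * sizeof(*src));	/* was: memcpy(dst, src, nr) */
	return dst;
}
```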

diff --git a/tools/perf/util/evsel_fprintf.c b/tools/perf/util/evsel_fprintf.c
index c2c0500d5da99..2108e02d4b6cd 100644
--- a/tools/perf/util/evsel_fprintf.c
+++ b/tools/perf/util/evsel_fprintf.c
@@ -180,8 +180,12 @@ int sample__fprintf_callchain(struct perf_sample *sample, int left_alignment,
if (print_dso && (!sym || !sym->inlined))
printed += map__fprintf_dsoname_dsoff(map, print_dsoff, addr, fp);

- if (print_srcline)
- printed += map__fprintf_srcline(map, addr, "\n ", fp);
+ if (print_srcline) {
+ if (node->srcline)
+ printed += fprintf(fp, "\n %s", node->srcline);
+ else
+ printed += map__fprintf_srcline(map, addr, "\n ", fp);
+ }

if (sym && sym->inlined)
printed += fprintf(fp, " (inlined)");
diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
index 09c9cc326c08d..67133b60b03cd 100644
--- a/tools/perf/util/maps.c
+++ b/tools/perf/util/maps.c
@@ -664,6 +664,7 @@ int maps__find_ams(struct maps *maps, struct addr_map_symbol *ams)
if (ams->addr < map__start(ams->ms.map) || ams->addr >= map__end(ams->ms.map)) {
if (maps == NULL)
return -1;
+ map__put(ams->ms.map);
ams->ms.map = maps__find(maps, ams->addr);
if (ams->ms.map == NULL)
return -1;
diff --git a/tools/perf/util/unwind-libdw.c b/tools/perf/util/unwind-libdw.c
index bde216e630d29..216f0d3762fda 100644
--- a/tools/perf/util/unwind-libdw.c
+++ b/tools/perf/util/unwind-libdw.c
@@ -133,8 +133,8 @@ static int entry(u64 ip, struct unwind_info *ui)
}

e->ip = ip;
- e->ms.maps = al.maps;
- e->ms.map = al.map;
+ e->ms.maps = maps__get(al.maps);
+ e->ms.map = map__get(al.map);
e->ms.sym = al.sym;

pr_debug("unwind: %s:ip = 0x%" PRIx64 " (0x%" PRIx64 ")\n",
@@ -319,6 +319,9 @@ int unwind__get_entries(unwind_entry_cb_t cb, void *arg,
if (err)
pr_debug("unwind: failed with '%s'\n", dwfl_errmsg(-1));

+ for (i = 0; i < ui->idx; i++)
+ map_symbol__exit(&ui->entries[i].ms);
+
dwfl_end(ui->dwfl);
free(ui);
return 0;
diff --git a/tools/power/cpupower/lib/cpuidle.c b/tools/power/cpupower/lib/cpuidle.c
index f2c1139adf716..bd857ee7541a7 100644
--- a/tools/power/cpupower/lib/cpuidle.c
+++ b/tools/power/cpupower/lib/cpuidle.c
@@ -150,6 +150,7 @@ unsigned long long cpuidle_state_get_one_value(unsigned int cpu,
if (len == 0)
return 0;

+ errno = 0;
value = strtoull(linebuf, &endp, 0);

if (endp == linebuf || errno == ERANGE)
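The cpupower fix above adds `errno = 0` before strtoull(): the function only *sets* errno on overflow and never clears it, so a stale ERANGE from an earlier call makes a valid parse look like a failure. The corrected pattern as a small helper:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Zero errno before strtoull() so a leftover ERANGE from an unrelated
 * earlier call cannot be mistaken for overflow of this parse. */
static int parse_u64(const char *s, unsigned long long *out)
{
	char *endp;

	errno = 0;
	*out = strtoull(s, &endp, 0);
	if (endp == s || errno == ERANGE)
		return -1;
	return 0;
}
```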
diff --git a/tools/power/x86/intel-speed-select/isst-config.c b/tools/power/x86/intel-speed-select/isst-config.c
index 5127be34869eb..07729d376f018 100644
--- a/tools/power/x86/intel-speed-select/isst-config.c
+++ b/tools/power/x86/intel-speed-select/isst-config.c
@@ -932,9 +932,11 @@ int isolate_cpus(struct isst_id *id, int mask_size, cpu_set_t *cpu_mask, int lev
ret = write(fd, "member", strlen("member"));
if (ret == -1) {
printf("Can't update to member\n");
+ close(fd);
return ret;
}

+ close(fd);
return 0;
}

diff --git a/tools/spi/.gitignore b/tools/spi/.gitignore
index 14ddba3d21957..038261b34ed83 100644
--- a/tools/spi/.gitignore
+++ b/tools/spi/.gitignore
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0-only
spidev_fdx
spidev_test
+include/
diff --git a/tools/testing/selftests/bpf/prog_tests/wq.c b/tools/testing/selftests/bpf/prog_tests/wq.c
index 99e438fe12acd..15ac8e6d17450 100644
--- a/tools/testing/selftests/bpf/prog_tests/wq.c
+++ b/tools/testing/selftests/bpf/prog_tests/wq.c
@@ -16,12 +16,12 @@ void serial_test_wq(void)
/* re-run the success test to check if the timer was actually executed */

wq_skel = wq__open_and_load();
- if (!ASSERT_OK_PTR(wq_skel, "wq_skel_load"))
+ if (!ASSERT_OK_PTR(wq_skel, "wq__open_and_load"))
return;

err = wq__attach(wq_skel);
if (!ASSERT_OK(err, "wq_attach"))
- return;
+ goto clean_up;

prog_fd = bpf_program__fd(wq_skel->progs.test_syscall_array_sleepable);
err = bpf_prog_test_run_opts(prog_fd, &topts);
@@ -31,6 +31,7 @@ void serial_test_wq(void)
usleep(50); /* 10 usecs should be enough, but give it extra */

ASSERT_EQ(wq_skel->bss->ok_sleepable, (1 << 1), "ok_sleepable");
+clean_up:
wq__destroy(wq_skel);
}

diff --git a/tools/testing/selftests/bpf/veristat.c b/tools/testing/selftests/bpf/veristat.c
index 7b6b9c4cadb57..e9d7acff6bbbc 100644
--- a/tools/testing/selftests/bpf/veristat.c
+++ b/tools/testing/selftests/bpf/veristat.c
@@ -1424,7 +1424,7 @@ static void output_stats(const struct verif_stats *s, enum resfmt fmt, bool last
if (last && fmt == RESFMT_TABLE) {
output_header_underlines();
printf("Done. Processed %d files, %d programs. Skipped %d files, %d programs.\n",
- env.files_processed, env.files_skipped, env.progs_processed, env.progs_skipped);
+ env.files_processed, env.progs_processed, env.files_skipped, env.progs_skipped);
}
}

diff --git a/tools/testing/selftests/drivers/net/mlxsw/tc_restrictions.sh b/tools/testing/selftests/drivers/net/mlxsw/tc_restrictions.sh
index 0441a18f098b1..aac8ef490feb8 100755
--- a/tools/testing/selftests/drivers/net/mlxsw/tc_restrictions.sh
+++ b/tools/testing/selftests/drivers/net/mlxsw/tc_restrictions.sh
@@ -317,7 +317,7 @@ police_limits_test()

tc filter add dev $swp1 ingress pref 1 proto ip handle 101 \
flower skip_sw \
- action police rate 0.5kbit burst 1m conform-exceed drop/ok
+ action police rate 0.5kbit burst 2k conform-exceed drop/ok
check_fail $? "Incorrect success to add police action with too low rate"

tc filter add dev $swp1 ingress pref 1 proto ip handle 101 \
@@ -327,7 +327,7 @@ police_limits_test()

tc filter add dev $swp1 ingress pref 1 proto ip handle 101 \
flower skip_sw \
- action police rate 1.5kbit burst 1m conform-exceed drop/ok
+ action police rate 1.5kbit burst 2k conform-exceed drop/ok
check_err $? "Failed to add police action with low rate"

tc filter del dev $swp1 ingress protocol ip pref 1 handle 101 flower
diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
index 0a0b555160280..b518699a71038 100644
--- a/tools/testing/selftests/memfd/memfd_test.c
+++ b/tools/testing/selftests/memfd/memfd_test.c
@@ -18,6 +18,9 @@
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ipc.h>
+#include <sys/sem.h>
#include <unistd.h>
#include <ctype.h>

@@ -39,6 +42,20 @@
F_SEAL_EXEC)

#define MFD_NOEXEC_SEAL 0x0008U
+union semun {
+ int val;
+ struct semid_ds *buf;
+ unsigned short int *array;
+ struct seminfo *__buf;
+};
+
+/*
+ * We use semaphores in the nested wait tasks due to the use of CLONE_NEWPID:
+ * the child will be PID 1 and can't send SIGSTOP to itself due to the special
+ * treatment of the init task, so the SIGSTOP/SIGCONT synchronization
+ * approach can't be used here.
+ */
+#define SEM_KEY 0xdeadbeef

/*
* Default is not to test hugetlbfs
@@ -1291,8 +1308,22 @@ static int sysctl_nested(void *arg)

static int sysctl_nested_wait(void *arg)
{
- /* Wait for a SIGCONT. */
- kill(getpid(), SIGSTOP);
+ int sem = semget(SEM_KEY, 1, 0600);
+ struct sembuf sembuf;
+
+ if (sem < 0) {
+ perror("semget:");
+ abort();
+ }
+ sembuf.sem_num = 0;
+ sembuf.sem_flg = 0;
+ sembuf.sem_op = 0;
+
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ abort();
+ }
+
return sysctl_nested(arg);
}

@@ -1313,7 +1344,9 @@ static void test_sysctl_sysctl2_failset(void)

static int sysctl_nested_child(void *arg)
{
- int pid;
+ int pid, sem;
+ union semun semun;
+ struct sembuf sembuf;

printf("%s nested sysctl 0\n", memfd_str);
sysctl_assert_write("0");
@@ -1347,23 +1380,53 @@ static int sysctl_nested_child(void *arg)
test_sysctl_sysctl2_failset);
join_thread(pid);

+ sem = semget(SEM_KEY, 1, IPC_CREAT | 0600);
+ if (sem < 0) {
+ perror("semget:");
+ return 1;
+ }
+ semun.val = 1;
+ sembuf.sem_op = -1;
+ sembuf.sem_flg = 0;
+ sembuf.sem_num = 0;
+
/* Verify that the rules are actually inherited after fork. */
printf("%s nested sysctl 0 -> 1 after fork\n", memfd_str);
sysctl_assert_write("0");

+ if (semctl(sem, 0, SETVAL, semun) < 0) {
+ perror("semctl:");
+ return 1;
+ }
+
pid = spawn_thread(CLONE_NEWPID, sysctl_nested_wait,
test_sysctl_sysctl1_failset);
sysctl_assert_write("1");
- kill(pid, SIGCONT);
+
+ /* Allow child to continue */
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ return 1;
+ }
join_thread(pid);

printf("%s nested sysctl 0 -> 2 after fork\n", memfd_str);
sysctl_assert_write("0");

+ if (semctl(sem, 0, SETVAL, semun) < 0) {
+ perror("semctl:");
+ return 1;
+ }
+
pid = spawn_thread(CLONE_NEWPID, sysctl_nested_wait,
test_sysctl_sysctl2_failset);
sysctl_assert_write("2");
- kill(pid, SIGCONT);
+
+ /* Allow child to continue */
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ return 1;
+ }
join_thread(pid);

/*
@@ -1373,28 +1436,62 @@ static int sysctl_nested_child(void *arg)
*/
printf("%s nested sysctl 2 -> 1 after fork\n", memfd_str);
sysctl_assert_write("2");
+
+ if (semctl(sem, 0, SETVAL, semun) < 0) {
+ perror("semctl:");
+ return 1;
+ }
+
pid = spawn_thread(CLONE_NEWPID, sysctl_nested_wait,
test_sysctl_sysctl2);
sysctl_assert_write("1");
- kill(pid, SIGCONT);
+
+ /* Allow child to continue */
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ return 1;
+ }
join_thread(pid);

printf("%s nested sysctl 2 -> 0 after fork\n", memfd_str);
sysctl_assert_write("2");
+
+ if (semctl(sem, 0, SETVAL, semun) < 0) {
+ perror("semctl:");
+ return 1;
+ }
+
pid = spawn_thread(CLONE_NEWPID, sysctl_nested_wait,
test_sysctl_sysctl2);
sysctl_assert_write("0");
- kill(pid, SIGCONT);
+
+ /* Allow child to continue */
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ return 1;
+ }
join_thread(pid);

printf("%s nested sysctl 1 -> 0 after fork\n", memfd_str);
sysctl_assert_write("1");
+
+ if (semctl(sem, 0, SETVAL, semun) < 0) {
+ perror("semctl:");
+ return 1;
+ }
+
pid = spawn_thread(CLONE_NEWPID, sysctl_nested_wait,
test_sysctl_sysctl1);
sysctl_assert_write("0");
- kill(pid, SIGCONT);
+ /* Allow child to continue */
+ if (semop(sem, &sembuf, 1) < 0) {
+ perror("semop:");
+ return 1;
+ }
join_thread(pid);

+ semctl(sem, 0, IPC_RMID);
+
return 0;
}

diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
index e1fe16bcbbe88..fa6713892d82d 100755
--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
@@ -290,7 +290,7 @@ function run_test() {
setup_cgroup "hugetlb_cgroup_test" "$cgroup_limit" "$reservation_limit"

mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge

write_hugetlbfs_and_get_usage "hugetlb_cgroup_test" "$size" "$populate" \
"$write" "/mnt/huge/test" "$method" "$private" "$expect_failure" \
@@ -344,7 +344,7 @@ function run_multiple_cgroup_test() {
setup_cgroup "hugetlb_cgroup_test2" "$cgroup_limit2" "$reservation_limit2"

mkdir -p /mnt/huge
- mount -t hugetlbfs -o pagesize=${MB}M,size=256M none /mnt/huge
+ mount -t hugetlbfs -o pagesize=${MB}M none /mnt/huge

write_hugetlbfs_and_get_usage "hugetlb_cgroup_test1" "$size1" \
"$populate1" "$write1" "/mnt/huge/test1" "$method" "$private" \
diff --git a/tools/testing/selftests/mm/pagemap_ioctl.c b/tools/testing/selftests/mm/pagemap_ioctl.c
index bcc73b4e805c6..805017fd9bdbf 100644
--- a/tools/testing/selftests/mm/pagemap_ioctl.c
+++ b/tools/testing/selftests/mm/pagemap_ioctl.c
@@ -34,8 +34,8 @@
#define PAGEMAP "/proc/self/pagemap"
int pagemap_fd;
int uffd;
-int page_size;
-int hpage_size;
+unsigned long page_size;
+unsigned int hpage_size;
const char *progname;

#define LEN(region) ((region.end - region.start)/page_size)
@@ -184,7 +184,7 @@ void *gethugetlb_mem(int size, int *shmid)

int userfaultfd_tests(void)
{
- int mem_size, vec_size, written, num_pages = 16;
+ long mem_size, vec_size, written, num_pages = 16;
char *mem, *vec;

mem_size = num_pages * page_size;
@@ -213,7 +213,7 @@ int userfaultfd_tests(void)
written = pagemap_ioctl(mem, mem_size, vec, 1, PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
vec_size - 2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (written < 0)
- ksft_exit_fail_msg("error %d %d %s\n", written, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", written, errno, strerror(errno));

ksft_test_result(written == 0, "%s all new pages must not be written (dirty)\n", __func__);

@@ -235,7 +235,9 @@ int get_reads(struct page_region *vec, int vec_size)

int sanity_tests_sd(void)
{
- int mem_size, vec_size, ret, ret2, ret3, i, num_pages = 1000, total_pages = 0;
+ unsigned long long mem_size, vec_size, i, total_pages = 0;
+ long ret, ret2, ret3;
+ int num_pages = 1000;
int total_writes, total_reads, reads, count;
struct page_region *vec, *vec2;
char *mem, *m[2];
@@ -321,9 +323,9 @@ int sanity_tests_sd(void)
ret = pagemap_ioctl(mem, mem_size, vec, vec_size, 0, 0, PAGE_IS_WRITTEN, 0,
0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

- ksft_test_result(ret == mem_size/(page_size * 2),
+ ksft_test_result((unsigned long long)ret == mem_size/(page_size * 2),
"%s Repeated pattern of written and non-written pages\n", __func__);

/* 4. Repeated pattern of written and non-written pages in parts */
@@ -331,21 +333,21 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
num_pages/2 - 2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

ret2 = pagemap_ioctl(mem, mem_size, vec, 2, 0, 0, PAGE_IS_WRITTEN, 0, 0,
PAGE_IS_WRITTEN);
if (ret2 < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret2, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret2, errno, strerror(errno));

ret3 = pagemap_ioctl(mem, mem_size, vec, vec_size,
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret3 < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret3, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret3, errno, strerror(errno));

ksft_test_result((ret + ret3) == num_pages/2 && ret2 == 2,
- "%s Repeated pattern of written and non-written pages in parts %d %d %d\n",
+ "%s Repeated pattern of written and non-written pages in parts %ld %ld %ld\n",
__func__, ret, ret3, ret2);

/* 5. Repeated pattern of written and non-written pages max_pages */
@@ -357,13 +359,13 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
num_pages/2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

ret2 = pagemap_ioctl(mem, mem_size, vec, vec_size,
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret2 < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret2, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret2, errno, strerror(errno));

ksft_test_result(ret == num_pages/2 && ret2 == 1,
"%s Repeated pattern of written and non-written pages max_pages\n",
@@ -378,12 +380,12 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

ret2 = pagemap_ioctl(mem, mem_size, vec2, vec_size, 0, 0,
PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret2 < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret2, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret2, errno, strerror(errno));

ksft_test_result(ret == 1 && LEN(vec[0]) == 2 &&
vec[0].start == (uintptr_t)(mem + page_size) &&
@@ -416,7 +418,7 @@ int sanity_tests_sd(void)
ret = pagemap_ioctl(m[1], mem_size, vec, 1, 0, 0, PAGE_IS_WRITTEN, 0, 0,
PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

ksft_test_result(ret == 1 && LEN(vec[0]) == mem_size/page_size,
"%s Two regions\n", __func__);
@@ -448,7 +450,7 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC, 0,
PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

for (i = 0; i < mem_size/page_size; i += 2)
mem[i * page_size]++;
@@ -457,7 +459,7 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
mem_size/(page_size*5), PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

total_pages += ret;

@@ -465,7 +467,7 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
mem_size/(page_size*5), PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

total_pages += ret;

@@ -473,7 +475,7 @@ int sanity_tests_sd(void)
PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
mem_size/(page_size*5), PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

total_pages += ret;

@@ -515,9 +517,9 @@ int sanity_tests_sd(void)
vec_size, PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

- if (ret > vec_size)
+ if ((unsigned long)ret > vec_size)
break;

reads = get_reads(vec, ret);
@@ -554,63 +556,63 @@ int sanity_tests_sd(void)
ret = pagemap_ioc(mem, 0, vec, vec_size, 0,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 0 && walk_end == (long)mem,
"Walk_end: Same start and end address\n");

ret = pagemap_ioc(mem, 0, vec, vec_size, PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 0 && walk_end == (long)mem,
"Walk_end: Same start and end with WP\n");

ret = pagemap_ioc(mem, 0, vec, 0, PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 0 && walk_end == (long)mem,
"Walk_end: Same start and end with 0 output buffer\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + mem_size),
"Walk_end: Big vec\n");

ret = pagemap_ioc(mem, mem_size, vec, 1, 0,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + mem_size),
"Walk_end: vec of minimum length\n");

ret = pagemap_ioc(mem, mem_size, vec, 1, 0,
vec_size, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + mem_size),
"Walk_end: Max pages specified\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
vec_size/2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + mem_size/2),
"Walk_end: Half max pages\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
1, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + page_size),
"Walk_end: 1 max page\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
-1, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + mem_size),
"Walk_end: max pages\n");

@@ -621,49 +623,49 @@ int sanity_tests_sd(void)
ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
- ksft_test_result(ret == vec_size/2 && walk_end == (long)(mem + mem_size),
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
+ ksft_test_result((unsigned long)ret == vec_size/2 && walk_end == (long)(mem + mem_size),
"Walk_end sparse: Big vec\n");

ret = pagemap_ioc(mem, mem_size, vec, 1, 0,
0, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + page_size * 2),
"Walk_end sparse: vec of minimum length\n");

ret = pagemap_ioc(mem, mem_size, vec, 1, 0,
vec_size, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + page_size * 2),
"Walk_end sparse: Max pages specified\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size/2, 0,
vec_size, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
- ksft_test_result(ret == vec_size/2 && walk_end == (long)(mem + mem_size),
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
+ ksft_test_result((unsigned long)ret == vec_size/2 && walk_end == (long)(mem + mem_size),
"Walk_end sparse: Max pages specified\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
vec_size, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
- ksft_test_result(ret == vec_size/2 && walk_end == (long)(mem + mem_size),
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
+ ksft_test_result((unsigned long)ret == vec_size/2 && walk_end == (long)(mem + mem_size),
"Walk_end sparse: Max pages specified\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
vec_size/2, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
- ksft_test_result(ret == vec_size/2 && walk_end == (long)(mem + mem_size),
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
+ ksft_test_result((unsigned long)ret == vec_size/2 && walk_end == (long)(mem + mem_size),
"Walk_endsparse : Half max pages\n");

ret = pagemap_ioc(mem, mem_size, vec, vec_size, 0,
1, PAGE_IS_WRITTEN, 0, 0, PAGE_IS_WRITTEN, &walk_end);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));
ksft_test_result(ret == 1 && walk_end == (long)(mem + page_size * 2),
"Walk_end: 1 max page\n");

@@ -674,9 +676,10 @@ int sanity_tests_sd(void)
return 0;
}

-int base_tests(char *prefix, char *mem, int mem_size, int skip)
+int base_tests(char *prefix, char *mem, unsigned long long mem_size, int skip)
{
- int vec_size, written;
+ unsigned long long vec_size;
+ int written;
struct page_region *vec, *vec2;

if (skip) {
@@ -799,8 +802,8 @@ int hpage_unit_tests(void)
char *map;
int ret, ret2;
size_t num_pages = 10;
- int map_size = hpage_size * num_pages;
- int vec_size = map_size/page_size;
+ unsigned long long map_size = hpage_size * num_pages;
+ unsigned long long vec_size = map_size/page_size;
struct page_region *vec, *vec2;

vec = malloc(sizeof(struct page_region) * vec_size);
@@ -992,7 +995,7 @@ int unmapped_region_tests(void)
{
void *start = (void *)0x10000000;
int written, len = 0x00040000;
- int vec_size = len / page_size;
+ long vec_size = len / page_size;
struct page_region *vec = malloc(sizeof(struct page_region) * vec_size);

/* 1. Get written pages */
@@ -1047,7 +1050,8 @@ static void test_simple(void)

int sanity_tests(void)
{
- int mem_size, vec_size, ret, fd, i, buf_size;
+ unsigned long long mem_size, vec_size;
+ long ret, fd, i, buf_size;
struct page_region *vec;
char *mem, *fmem;
struct stat sbuf;
@@ -1156,7 +1160,7 @@ int sanity_tests(void)

ret = stat(progname, &sbuf);
if (ret < 0)
- ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+ ksft_exit_fail_msg("error %ld %d %s\n", ret, errno, strerror(errno));

fmem = mmap(NULL, sbuf.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
if (fmem == MAP_FAILED)
@@ -1312,7 +1316,9 @@ static ssize_t get_dirty_pages_reset(char *mem, unsigned int count,
{
struct pm_scan_arg arg = {0};
struct page_region rgns[256];
- int i, j, cnt, ret;
+ unsigned long long i, j;
+ long ret;
+ int cnt;

arg.size = sizeof(struct pm_scan_arg);
arg.start = (uintptr_t)mem;
@@ -1330,7 +1336,7 @@ static ssize_t get_dirty_pages_reset(char *mem, unsigned int count,
ksft_exit_fail_msg("ioctl failed\n");

cnt = 0;
- for (i = 0; i < ret; ++i) {
+ for (i = 0; i < (unsigned long)ret; ++i) {
if (rgns[i].categories != PAGE_IS_WRITTEN)
ksft_exit_fail_msg("wrong flags\n");

@@ -1384,9 +1390,10 @@ void *thread_proc(void *mem)
static void transact_test(int page_size)
{
unsigned int i, count, extra_pages;
+ unsigned int c;
pthread_t th;
char *mem;
- int ret, c;
+ int ret;

if (pthread_barrier_init(&start_barrier, NULL, nthreads + 1))
ksft_exit_fail_msg("pthread_barrier_init\n");
@@ -1473,9 +1480,10 @@ static void transact_test(int page_size)
extra_thread_faults);
}

-int main(int argc, char *argv[])
+int main(int __attribute__((unused)) argc, char *argv[])
{
- int mem_size, shmid, buf_size, fd, i, ret;
+ int shmid, buf_size, fd, i, ret;
+ unsigned long long mem_size;
char *mem, *map, *fmem;
struct stat sbuf;

diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index a4a2805d3d3e7..4fa66a50e81a4 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -138,7 +138,7 @@ void clear_softdirty(void)
ksft_exit_fail_msg("opening clear_refs failed\n");
ret = write(fd, ctrl, strlen(ctrl));
close(fd);
- if (ret != strlen(ctrl))
+ if (ret != (signed int)strlen(ctrl))
ksft_exit_fail_msg("writing clear_refs failed\n");
}

diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
index 3f9d50f1ef9ec..1952023c43ba4 100755
--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
+++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d.sh
@@ -559,6 +559,21 @@ vxlan_encapped_ping_do()
local inner_tos=$1; shift
local outer_tos=$1; shift

+ local ipv4hdr=$(:
+ )"45:"$( : IP version + IHL
+ )"$inner_tos:"$( : IP TOS
+ )"00:54:"$( : IP total length
+ )"99:83:"$( : IP identification
+ )"40:00:"$( : IP flags + frag off
+ )"40:"$( : IP TTL
+ )"01:"$( : IP proto
+ )"CHECKSUM:"$( : IP header csum
+ )"c0:00:02:03:"$( : IP saddr: 192.0.2.3
+ )"c0:00:02:01"$( : IP daddr: 192.0.2.1
+ )
+ local checksum=$(payload_template_calc_checksum "$ipv4hdr")
+ ipv4hdr=$(payload_template_expand_checksum "$ipv4hdr" $checksum)
+
$MZ $dev -c $count -d 100msec -q \
-b $next_hop_mac -B $dest_ip \
-t udp tos=$outer_tos,sp=23456,dp=$VXPORT,p=$(:
@@ -569,16 +584,7 @@ vxlan_encapped_ping_do()
)"$dest_mac:"$( : ETH daddr
)"$(mac_get w2):"$( : ETH saddr
)"08:00:"$( : ETH type
- )"45:"$( : IP version + IHL
- )"$inner_tos:"$( : IP TOS
- )"00:54:"$( : IP total length
- )"99:83:"$( : IP identification
- )"40:00:"$( : IP flags + frag off
- )"40:"$( : IP TTL
- )"01:"$( : IP proto
- )"00:00:"$( : IP header csum
- )"c0:00:02:03:"$( : IP saddr: 192.0.2.3
- )"c0:00:02:01:"$( : IP daddr: 192.0.2.1
+ )"$ipv4hdr:"$( : IPv4 header
)"08:"$( : ICMP type
)"00:"$( : ICMP code
)"8b:f2:"$( : ICMP csum
diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
index a603f7b0a08f0..e642feeada0e7 100755
--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
+++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
@@ -695,7 +695,7 @@ vxlan_encapped_ping_do()
)"6"$( : IP version
)"$inner_tos"$( : Traffic class
)"0:00:00:"$( : Flow label
- )"00:08:"$( : Payload length
+ )"00:03:"$( : Payload length
)"3a:"$( : Next header
)"04:"$( : Hop limit
)"$saddr:"$( : IP saddr