From patchwork Tue Mar 25 12:15:43 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876236
From: guoren@kernel.org
To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org,
 paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org,
 atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de,
 will@kernel.org, mark.rutland@arm.com, brauner@kernel.org,
 akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com,
 unicorn_wang@outlook.com, inochiama@outlook.com, gaohan@iscas.ac.cn,
 shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com,
 prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com,
 wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com,
 dsterba@suse.com, mingo@redhat.com, peterz@infradead.org,
 boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com,
 qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org,
 conor.dooley@microchip.com, samuel.holland@sifive.com,
 yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com,
 ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com,
 qiaozhe@iscas.ac.cn
Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-mm@kvack.org,
 linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org,
 linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
 netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net,
 linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, linux-nfs@vger.kernel.org,
 linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [RFC PATCH V3 02/43] rv64ilp32_abi: riscv: Adapt Makefile and Kconfig
Date: Tue, 25 Mar 2025 08:15:43 -0400
Message-Id: <20250325121624.523258-3-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

Extend the ARCH_RV64I base with ABI_RV64ILP32 to compile the Linux kernel
itself as ILP32 when CONFIG_64BIT=y, minimizing the kernel's memory
footprint and cache occupancy.

The 'cmd_cpp_lds_S' rule in scripts/Makefile.build uses cpp_flags to
generate the linker script, so add "-mabi=xxx" to KBUILD_CPPFLAGS, just as
is already done for KBUILD_CFLAGS and KBUILD_AFLAGS:

  cmd_cpp_lds_S = $(CPP) $(cpp_flags) -P -U$(ARCH)

The rv64ilp32 ABI reuses an rv64 toolchain whose default "-mabi=" is lp64,
so add "-mabi=ilp32" to override it.
Add a config entry with an rv64ilp32.config fragment in the Makefile:
 - rv64ilp32_defconfig

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig                  | 12 ++++++++++--
 arch/riscv/Makefile                 | 17 +++++++++++++++++
 arch/riscv/configs/rv64ilp32.config |  1 +
 3 files changed, 28 insertions(+), 2 deletions(-)
 create mode 100644 arch/riscv/configs/rv64ilp32.config

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 7612c52e9b1e..da2111b0111c 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -213,7 +213,7 @@ config RISCV
 	select TRACE_IRQFLAGS_SUPPORT
 	select UACCESS_MEMCPY if !MMU
 	select USER_STACKTRACE_SUPPORT
-	select ZONE_DMA32 if 64BIT
+	select ZONE_DMA32 if 64BIT && !ABI_RV64ILP32

 config RUSTC_SUPPORTS_RISCV
 	def_bool y
@@ -298,6 +298,7 @@ config PAGE_OFFSET
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN_GENERIC
+	default 0x70000000 if ABI_RV64ILP32
 	default 0xdfffffff00000000 if 64BIT
 	default 0xffffffff if 32BIT
@@ -341,7 +342,7 @@ config FIX_EARLYCON_MEM
 config ILLEGAL_POINTER_VALUE
 	hex
-	default 0 if 32BIT
+	default 0 if 32BIT || ABI_RV64ILP32
 	default 0xdead000000000000 if 64BIT

 config PGTABLE_LEVELS
@@ -418,6 +419,13 @@ config ARCH_RV64I

 endchoice

+config ABI_RV64ILP32
+	bool "ABI RV64ILP32"
+	depends on 64BIT
+	help
+	  Compile the Linux kernel itself into the RV64ILP32 ABI of the
+	  RISC-V psABI specification.
+
 # We must be able to map all physical memory into the kernel, but the compiler
 # is still a bit more efficient when generating code if it's setup in a manner
 # such that it can only map 2GiB of memory.
diff --git a/arch/riscv/Makefile b/arch/riscv/Makefile
index 13fbc0f94238..76db01020a22 100644
--- a/arch/riscv/Makefile
+++ b/arch/riscv/Makefile
@@ -30,10 +30,21 @@ ifeq ($(CONFIG_ARCH_RV64I),y)
 	BITS := 64
 	UTS_MACHINE := riscv64

+ifeq ($(CONFIG_ABI_RV64ILP32),y)
+	KBUILD_CPPFLAGS += -mabi=ilp32
+
+	KBUILD_CFLAGS += -mabi=ilp32
+	KBUILD_AFLAGS += -mabi=ilp32
+
+	KBUILD_LDFLAGS += -melf32lriscv
+else
+	KBUILD_CPPFLAGS += -mabi=lp64
+
 	KBUILD_CFLAGS += -mabi=lp64
 	KBUILD_AFLAGS += -mabi=lp64

 	KBUILD_LDFLAGS += -melf64lriscv
+endif

 	KBUILD_RUSTFLAGS += -Ctarget-cpu=generic-rv64 --target=riscv64imac-unknown-none-elf \
 			    -Cno-redzone
@@ -41,6 +52,8 @@ else
 	BITS := 32
 	UTS_MACHINE := riscv32

+	KBUILD_CPPFLAGS += -mabi=ilp32
+
 	KBUILD_CFLAGS += -mabi=ilp32
 	KBUILD_AFLAGS += -mabi=ilp32

 	KBUILD_LDFLAGS += -melf32lriscv
@@ -224,6 +237,10 @@ PHONY += rv32_nommu_virt_defconfig
 rv32_nommu_virt_defconfig:
	$(Q)$(MAKE) -f $(srctree)/Makefile nommu_virt_defconfig 32-bit.config

+PHONY += rv64ilp32_defconfig
+rv64ilp32_defconfig:
+	$(Q)$(MAKE) -f $(srctree)/Makefile defconfig rv64ilp32.config
+
 define archhelp
	echo '  Image		- Uncompressed kernel image (arch/riscv/boot/Image)'
	echo '  Image.gz	- Compressed kernel image (arch/riscv/boot/Image.gz)'
diff --git a/arch/riscv/configs/rv64ilp32.config b/arch/riscv/configs/rv64ilp32.config
new file mode 100644
index 000000000000..07536586e169
--- /dev/null
+++ b/arch/riscv/configs/rv64ilp32.config
@@ -0,0 +1 @@
+CONFIG_ABI_RV64ILP32=y

From patchwork Tue Mar 25 12:15:45 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876235
From: guoren@kernel.org
To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org,
 paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org,
 atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de,
 will@kernel.org,
 mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org,
 rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com,
 inochiama@outlook.com, gaohan@iscas.ac.cn, shihua@iscas.ac.cn,
 jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com,
 prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com,
 wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com,
 dsterba@suse.com, mingo@redhat.com, peterz@infradead.org,
 boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com,
 qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org,
 conor.dooley@microchip.com, samuel.holland@sifive.com,
 yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com,
 ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com,
 qiaozhe@iscas.ac.cn
Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org,
 linux-riscv@lists.infradead.org, kvm@vger.kernel.org,
 kvm-riscv@lists.infradead.org, linux-mm@kvack.org,
 linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org,
 linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org,
 netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net,
 linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org,
 coreteam@netfilter.org, linux-nfs@vger.kernel.org,
 linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org,
 linux-media@vger.kernel.org
Subject: [RFC PATCH V3 04/43] rv64ilp32_abi: riscv: Introduce xlen_t to adapt __riscv_xlen != BITS_PER_LONG
Date: Tue, 25 Mar 2025 08:15:45 -0400
Message-Id: <20250325121624.523258-5-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

With the RV64ILP32 ABI, BITS_PER_LONG can no longer determine XLEN,
because it is 32 even when CONFIG_64BIT=y. Introduce xlen_t and use
CONFIG_64BIT or __riscv_xlen == 64 to determine the register width.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/checksum.h      |  4 ++
 arch/riscv/include/asm/csr.h           | 15 ++--
 arch/riscv/include/asm/processor.h     | 10 +--
 arch/riscv/include/asm/ptrace.h        | 92 ++++++++++++------------
 arch/riscv/include/asm/sparsemem.h     |  2 +-
 arch/riscv/include/asm/switch_to.h     |  4 +-
 arch/riscv/include/asm/thread_info.h   |  2 +-
 arch/riscv/include/asm/timex.h         |  4 +-
 arch/riscv/include/uapi/asm/elf.h      |  4 +-
 arch/riscv/include/uapi/asm/ptrace.h   | 97 ++++++++++++++------------
 arch/riscv/include/uapi/asm/ucontext.h |  7 +-
 arch/riscv/include/uapi/asm/unistd.h   |  2 +-
 arch/riscv/kernel/compat_signal.c      |  4 +-
 arch/riscv/kernel/process.c            |  8 +--
 arch/riscv/kernel/signal.c             |  4 +-
 arch/riscv/kernel/traps.c              |  4 +-
 arch/riscv/kernel/vector.c             |  2 +-
 arch/riscv/mm/fault.c                  |  2 +-
 18 files changed, 143 insertions(+), 124 deletions(-)

diff --git a/arch/riscv/include/asm/checksum.h b/arch/riscv/include/asm/checksum.h
index 88e6f1499e88..e887f0983b69 100644
--- a/arch/riscv/include/asm/checksum.h
+++ b/arch/riscv/include/asm/checksum.h
@@ -36,7 +36,11 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
  */
 static inline __sum16 ip_fast_csum(const void *iph, unsigned int ihl)
 {
+#if __riscv_xlen == 64
+	unsigned long long csum = 0;
+#else
 	unsigned long csum = 0;
+#endif
 	int pos = 0;

 	do {
diff --git a/arch/riscv/include/asm/csr.h b/arch/riscv/include/asm/csr.h
index 25f7c5afea3a..4339600e3c56 100644
--- a/arch/riscv/include/asm/csr.h
+++ b/arch/riscv/include/asm/csr.h
@@ -522,10 +522,11 @@
 #define IE_EIE		(_AC(0x1, UXL) << RV_IRQ_EXT)

 #ifndef __ASSEMBLY__
+#include

 #define csr_swap(csr, val)					\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
			      : "=r" (__v) : "rK" (__v)		\
			      : "memory");			\
@@ -534,7 +535,7 @@
 #define csr_read(csr)						\
 ({								\
-	register unsigned long __v;				\
+	register xlen_t __v;					\
	__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)	\
			      : "=r" (__v) :			\
			      : "memory");			\
@@ -543,7 +544,7 @@
 #define csr_write(csr, val)					\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"	\
			      : : "rK" (__v)			\
			      : "memory");			\
@@ -551,7 +552,7 @@
 #define csr_read_set(csr, val)					\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
			      : "=r" (__v) : "rK" (__v)		\
			      : "memory");			\
@@ -560,7 +561,7 @@
 #define csr_set(csr, val)					\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"	\
			      : : "rK" (__v)			\
			      : "memory");			\
@@ -568,7 +569,7 @@
 #define csr_read_clear(csr, val)				\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
			      : "=r" (__v) : "rK" (__v)		\
			      : "memory");			\
@@ -577,7 +578,7 @@
 #define csr_clear(csr, val)					\
 ({								\
-	unsigned long __v = (unsigned long)(val);		\
+	xlen_t __v = (xlen_t)(val);				\
	__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"	\
			      : : "rK" (__v)			\
			      : "memory");			\
diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index 5f56eb9d114a..ca57a650c3d2 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -45,7 +45,7 @@
  * This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
  */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #define TASK_UNMAPPED_BASE	PAGE_ALIGN((UL(1) << MMAP_MIN_VA_BITS) / 3)
 #else
 #define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE / 3)
@@ -99,10 +99,10 @@ struct thread_struct {
	/* Callee-saved registers */
	unsigned long ra;
	unsigned long sp;	/* Kernel mode stack */
-	unsigned long s[12];	/* s[0]: frame pointer */
+	xlen_t s[12];		/* s[0]: frame pointer */
	struct __riscv_d_ext_state fstate;
	unsigned long bad_cause;
-	unsigned long envcfg;
+	xlen_t envcfg;
	u32 riscv_v_flags;
	u32 vstate_ctrl;
	struct __riscv_v_ext_state vstate;
@@ -133,8 +133,8 @@ static inline void arch_thread_struct_whitelist(unsigned long *offset,
	((struct pt_regs *)(task_stack_page(tsk) + THREAD_SIZE		\
			    - ALIGN(sizeof(struct pt_regs), STACK_ALIGN)))

-#define KSTK_EIP(tsk)		(task_pt_regs(tsk)->epc)
-#define KSTK_ESP(tsk)		(task_pt_regs(tsk)->sp)
+#define KSTK_EIP(tsk)		(ulong)(task_pt_regs(tsk)->epc)
+#define KSTK_ESP(tsk)		(ulong)(task_pt_regs(tsk)->sp)

 /* Do necessary setup to start up a newly executed thread.
 */
diff --git a/arch/riscv/include/asm/ptrace.h b/arch/riscv/include/asm/ptrace.h
index b5b0adcc85c1..a0ed27c2346b 100644
--- a/arch/riscv/include/asm/ptrace.h
+++ b/arch/riscv/include/asm/ptrace.h
@@ -13,51 +13,51 @@
 #ifndef __ASSEMBLY__

 struct pt_regs {
-	unsigned long epc;
-	unsigned long ra;
-	unsigned long sp;
-	unsigned long gp;
-	unsigned long tp;
-	unsigned long t0;
-	unsigned long t1;
-	unsigned long t2;
-	unsigned long s0;
-	unsigned long s1;
-	unsigned long a0;
-	unsigned long a1;
-	unsigned long a2;
-	unsigned long a3;
-	unsigned long a4;
-	unsigned long a5;
-	unsigned long a6;
-	unsigned long a7;
-	unsigned long s2;
-	unsigned long s3;
-	unsigned long s4;
-	unsigned long s5;
-	unsigned long s6;
-	unsigned long s7;
-	unsigned long s8;
-	unsigned long s9;
-	unsigned long s10;
-	unsigned long s11;
-	unsigned long t3;
-	unsigned long t4;
-	unsigned long t5;
-	unsigned long t6;
+	xlen_t epc;
+	xlen_t ra;
+	xlen_t sp;
+	xlen_t gp;
+	xlen_t tp;
+	xlen_t t0;
+	xlen_t t1;
+	xlen_t t2;
+	xlen_t s0;
+	xlen_t s1;
+	xlen_t a0;
+	xlen_t a1;
+	xlen_t a2;
+	xlen_t a3;
+	xlen_t a4;
+	xlen_t a5;
+	xlen_t a6;
+	xlen_t a7;
+	xlen_t s2;
+	xlen_t s3;
+	xlen_t s4;
+	xlen_t s5;
+	xlen_t s6;
+	xlen_t s7;
+	xlen_t s8;
+	xlen_t s9;
+	xlen_t s10;
+	xlen_t s11;
+	xlen_t t3;
+	xlen_t t4;
+	xlen_t t5;
+	xlen_t t6;
	/* Supervisor/Machine CSRs */
-	unsigned long status;
-	unsigned long badaddr;
-	unsigned long cause;
+	xlen_t status;
+	xlen_t badaddr;
+	xlen_t cause;
	/* a0 value before the syscall */
-	unsigned long orig_a0;
+	xlen_t orig_a0;
 };

 #define PTRACE_SYSEMU			0x1f
 #define PTRACE_SYSEMU_SINGLESTEP	0x20

 #ifdef CONFIG_64BIT
-#define REG_FMT "%016lx"
+#define REG_FMT "%016llx"
 #else
 #define REG_FMT "%08lx"
 #endif
@@ -69,12 +69,12 @@ struct pt_regs {
 /* Helpers for working with the instruction pointer */
 static inline unsigned long instruction_pointer(struct pt_regs *regs)
 {
-	return regs->epc;
+	return (unsigned long)regs->epc;
 }
 static inline void instruction_pointer_set(struct pt_regs *regs,
					   unsigned long val)
 {
-	regs->epc = val;
+	regs->epc = (xlen_t)val;
 }

 #define profile_pc(regs) instruction_pointer(regs)
@@ -82,40 +82,40 @@ static inline void instruction_pointer_set(struct pt_regs *regs,
 /* Helpers for working with the user stack pointer */
 static inline unsigned long user_stack_pointer(struct pt_regs *regs)
 {
-	return regs->sp;
+	return (unsigned long)regs->sp;
 }
 static inline void user_stack_pointer_set(struct pt_regs *regs,
					  unsigned long val)
 {
-	regs->sp = val;
+	regs->sp = (xlen_t)val;
 }

 /* Valid only for Kernel mode traps. */
 static inline unsigned long kernel_stack_pointer(struct pt_regs *regs)
 {
-	return regs->sp;
+	return (unsigned long)regs->sp;
 }

 /* Helpers for working with the frame pointer */
 static inline unsigned long frame_pointer(struct pt_regs *regs)
 {
-	return regs->s0;
+	return (unsigned long)regs->s0;
 }
 static inline void frame_pointer_set(struct pt_regs *regs,
				     unsigned long val)
 {
-	regs->s0 = val;
+	regs->s0 = (xlen_t)val;
 }

 static inline unsigned long regs_return_value(struct pt_regs *regs)
 {
-	return regs->a0;
+	return (unsigned long)regs->a0;
 }
 static inline void regs_set_return_value(struct pt_regs *regs,
					 unsigned long val)
 {
-	regs->a0 = val;
+	regs->a0 = (xlen_t)val;
 }

 extern int regs_query_register_offset(const char *name);
diff --git a/arch/riscv/include/asm/sparsemem.h b/arch/riscv/include/asm/sparsemem.h
index 2f901a410586..68907698caa6 100644
--- a/arch/riscv/include/asm/sparsemem.h
+++ b/arch/riscv/include/asm/sparsemem.h
@@ -4,7 +4,7 @@
 #define _ASM_RISCV_SPARSEMEM_H

 #ifdef CONFIG_SPARSEMEM
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #define MAX_PHYSMEM_BITS	56
 #else
 #define MAX_PHYSMEM_BITS	32
diff --git a/arch/riscv/include/asm/switch_to.h b/arch/riscv/include/asm/switch_to.h
index 0e71eb82f920..6d01b0fc5a25 100644
--- a/arch/riscv/include/asm/switch_to.h
+++ b/arch/riscv/include/asm/switch_to.h
@@ -71,9 +71,9 @@ static __always_inline bool has_fpu(void) { return false; }
 #endif

 static inline void envcfg_update_bits(struct task_struct *task,
-				      unsigned long mask, unsigned long val)
+				      xlen_t mask, xlen_t val)
 {
-	unsigned long envcfg;
+	xlen_t envcfg;

	envcfg = (task->thread.envcfg & ~mask) | val;
	task->thread.envcfg = envcfg;
diff --git a/arch/riscv/include/asm/thread_info.h b/arch/riscv/include/asm/thread_info.h
index f5916a70879a..637a46fc7ed8 100644
--- a/arch/riscv/include/asm/thread_info.h
+++ b/arch/riscv/include/asm/thread_info.h
@@ -71,7 +71,7 @@ struct thread_info {
	 * Used in handle_exception() to save a0, a1 and a2 before knowing if we
	 * can access the kernel stack.
	 */
-	unsigned long a0, a1, a2;
+	xlen_t a0, a1, a2;
 #endif
 };
diff --git a/arch/riscv/include/asm/timex.h b/arch/riscv/include/asm/timex.h
index a06697846e69..b5ca67b30d0b 100644
--- a/arch/riscv/include/asm/timex.h
+++ b/arch/riscv/include/asm/timex.h
@@ -8,7 +8,7 @@
 #include

-typedef unsigned long cycles_t;
+typedef xlen_t cycles_t;

 #ifdef CONFIG_RISCV_M_MODE
@@ -84,7 +84,7 @@ static inline u64 get_cycles64(void)
 #define ARCH_HAS_READ_CURRENT_TIMER
 static inline int read_current_timer(unsigned long *timer_val)
 {
-	*timer_val = get_cycles();
+	*timer_val = (unsigned long)get_cycles();
	return 0;
 }
diff --git a/arch/riscv/include/uapi/asm/elf.h b/arch/riscv/include/uapi/asm/elf.h
index 11a71b8533d5..9fc8c2e3556b 100644
--- a/arch/riscv/include/uapi/asm/elf.h
+++ b/arch/riscv/include/uapi/asm/elf.h
@@ -15,7 +15,7 @@
 #include

 /* ELF register definitions */
-typedef unsigned long elf_greg_t;
+typedef xlen_t elf_greg_t;
 typedef struct user_regs_struct elf_gregset_t;
 #define ELF_NGREG (sizeof(elf_gregset_t) / sizeof(elf_greg_t))
@@ -24,7 +24,7 @@ typedef __u64 elf_fpreg_t;
 typedef union __riscv_fp_state elf_fpregset_t;
 #define ELF_NFPREG (sizeof(struct __riscv_d_ext_state) / sizeof(elf_fpreg_t))

-#if __riscv_xlen == 64
+#if BITS_PER_LONG == 64
 #define ELF_RISCV_R_SYM(r_info)		ELF64_R_SYM(r_info)
 #define ELF_RISCV_R_TYPE(r_info)	ELF64_R_TYPE(r_info)
 #else
diff --git a/arch/riscv/include/uapi/asm/ptrace.h b/arch/riscv/include/uapi/asm/ptrace.h
index a38268b19c3d..f040a2ba07b0 100644
--- a/arch/riscv/include/uapi/asm/ptrace.h
+++ b/arch/riscv/include/uapi/asm/ptrace.h
@@ -15,6 +15,14 @@
 #define PTRACE_GETFDPIC_EXEC		0
 #define PTRACE_GETFDPIC_INTERP		1

+#if __riscv_xlen == 64
+typedef u64 xlen_t;
+#endif
+
+#if __riscv_xlen == 32
+typedef ulong xlen_t;
+#endif
+
 /*
  * User-mode register state for core dumps, ptrace, sigcontext
  *
@@ -22,38 +30,38 @@
  * struct user_regs_struct must form a prefix of struct pt_regs.
  */
 struct user_regs_struct {
-	unsigned long pc;
-	unsigned long ra;
-	unsigned long sp;
-	unsigned long gp;
-	unsigned long tp;
-	unsigned long t0;
-	unsigned long t1;
-	unsigned long t2;
-	unsigned long s0;
-	unsigned long s1;
-	unsigned long a0;
-	unsigned long a1;
-	unsigned long a2;
-	unsigned long a3;
-	unsigned long a4;
-	unsigned long a5;
-	unsigned long a6;
-	unsigned long a7;
-	unsigned long s2;
-	unsigned long s3;
-	unsigned long s4;
-	unsigned long s5;
-	unsigned long s6;
-	unsigned long s7;
-	unsigned long s8;
-	unsigned long s9;
-	unsigned long s10;
-	unsigned long s11;
-	unsigned long t3;
-	unsigned long t4;
-	unsigned long t5;
-	unsigned long t6;
+	xlen_t pc;
+	xlen_t ra;
+	xlen_t sp;
+	xlen_t gp;
+	xlen_t tp;
+	xlen_t t0;
+	xlen_t t1;
+	xlen_t t2;
+	xlen_t s0;
+	xlen_t s1;
+	xlen_t a0;
+	xlen_t a1;
+	xlen_t a2;
+	xlen_t a3;
+	xlen_t a4;
+	xlen_t a5;
+	xlen_t a6;
+	xlen_t a7;
+	xlen_t s2;
+	xlen_t s3;
+	xlen_t s4;
+	xlen_t s5;
+	xlen_t s6;
+	xlen_t s7;
+	xlen_t s8;
+	xlen_t s9;
+	xlen_t s10;
+	xlen_t s11;
+	xlen_t t3;
+	xlen_t t4;
+	xlen_t t5;
+	xlen_t t6;
 };

 struct __riscv_f_ext_state {
@@ -98,12 +106,15 @@ union __riscv_fp_state {
 };

 struct __riscv_v_ext_state {
-	unsigned long vstart;
-	unsigned long vl;
-	unsigned long vtype;
-	unsigned long vcsr;
-	unsigned long vlenb;
-	void *datap;
+	xlen_t vstart;
+	xlen_t vl;
+	xlen_t vtype;
+	xlen_t vcsr;
+	xlen_t vlenb;
+	union {
+		void *datap;
+		xlen_t pad;
+	};
	/*
	 * In signal handler, datap will be set a correct user stack offset
	 * and vector registers will be copied to the address of datap
@@ -112,11 +123,11 @@ struct __riscv_v_ext_state {
 };

 struct __riscv_v_regset_state {
-	unsigned long vstart;
-	unsigned long vl;
-	unsigned long vtype;
-	unsigned long vcsr;
-	unsigned long vlenb;
+	xlen_t vstart;
+	xlen_t vl;
+	xlen_t vtype;
+	xlen_t vcsr;
+	xlen_t vlenb;
	char vreg[];
 };
diff --git a/arch/riscv/include/uapi/asm/ucontext.h b/arch/riscv/include/uapi/asm/ucontext.h
index 516bd0bb0da5..572b96c3ccf4 100644
--- a/arch/riscv/include/uapi/asm/ucontext.h
+++ b/arch/riscv/include/uapi/asm/ucontext.h
@@ -11,8 +11,11 @@
 #include

 struct ucontext {
-	unsigned long	  uc_flags;
-	struct ucontext	 *uc_link;
+	xlen_t		  uc_flags;
+	union {
+		struct ucontext	*uc_link;
+		xlen_t		 pad;
+	};
	stack_t		  uc_stack;
	sigset_t	  uc_sigmask;
	/*
diff --git a/arch/riscv/include/uapi/asm/unistd.h b/arch/riscv/include/uapi/asm/unistd.h
index 81896bbbf727..e33dd5161b8d 100644
--- a/arch/riscv/include/uapi/asm/unistd.h
+++ b/arch/riscv/include/uapi/asm/unistd.h
@@ -16,7 +16,7 @@
  */
 #include

-#if __BITS_PER_LONG == 64
+#if __riscv_xlen == 64
 #include
 #else
 #include
diff --git a/arch/riscv/kernel/compat_signal.c b/arch/riscv/kernel/compat_signal.c
index 6ec4e34255a9..859104618f34 100644
--- a/arch/riscv/kernel/compat_signal.c
+++ b/arch/riscv/kernel/compat_signal.c
@@ -126,7 +126,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
	/* Always make any pending restarted system calls return -EINTR */
	current->restart_block.fn = do_no_restart_syscall;

-	frame = (struct compat_rt_sigframe __user *)regs->sp;
+	frame = (struct compat_rt_sigframe __user *)(ulong)regs->sp;

	if (!access_ok(frame, sizeof(*frame)))
		goto badframe;
@@ -150,7 +150,7 @@ COMPAT_SYSCALL_DEFINE0(rt_sigreturn)
		pr_info_ratelimited(
			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
			task->comm, task_pid_nr(task), __func__,
-			frame, (void *)regs->epc, (void *)regs->sp);
+			frame, (void *)(ulong)regs->epc, (void *)(ulong)regs->sp);
	}
	force_sig(SIGSEGV);
	return 0;
diff --git a/arch/riscv/kernel/process.c b/arch/riscv/kernel/process.c
index 7c244de77180..5c827761f84b 100644
--- a/arch/riscv/kernel/process.c
+++ b/arch/riscv/kernel/process.c
@@ -65,8 +65,8 @@ void __show_regs(struct pt_regs *regs)
	show_regs_print_info(KERN_DEFAULT);

	if (!user_mode(regs)) {
-		pr_cont("epc : %pS\n", (void *)regs->epc);
-		pr_cont(" ra : %pS\n", (void *)regs->ra);
+		pr_cont("epc : %pS\n", (void *)(ulong)regs->epc);
+		pr_cont(" ra : %pS\n", (void *)(ulong)regs->ra);
	}

	pr_cont("epc : " REG_FMT " ra : " REG_FMT " sp : " REG_FMT "\n",
@@ -272,7 +272,7 @@ long set_tagged_addr_ctrl(struct task_struct *task, unsigned long arg)
	unsigned long valid_mask = PR_PMLEN_MASK | PR_TAGGED_ADDR_ENABLE;
	struct thread_info *ti = task_thread_info(task);
	struct mm_struct *mm = task->mm;
-	unsigned long pmm;
+	xlen_t pmm;
	u8 pmlen;

	if (is_compat_thread(ti))
@@ -352,7 +352,7 @@ long get_tagged_addr_ctrl(struct task_struct *task)
	return ret;
 }

-static bool try_to_set_pmm(unsigned long value)
+static bool try_to_set_pmm(xlen_t value)
 {
	csr_set(CSR_ENVCFG, value);
	return (csr_read_clear(CSR_ENVCFG, ENVCFG_PMM) & ENVCFG_PMM) == value;
diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
index 94e905eea1de..b3eb4154faf7 100644
--- a/arch/riscv/kernel/signal.c
+++ b/arch/riscv/kernel/signal.c
@@ -239,7 +239,7 @@ SYSCALL_DEFINE0(rt_sigreturn)
	/* Always make any pending restarted system calls return -EINTR */
	current->restart_block.fn = do_no_restart_syscall;

-	frame = (struct rt_sigframe __user *)regs->sp;
+	frame = (struct rt_sigframe __user *)(ulong)regs->sp;

	if (!access_ok(frame, frame_size))
		goto badframe;
@@ -265,7 +265,7 @@ SYSCALL_DEFINE0(rt_sigreturn)
		pr_info_ratelimited(
			"%s[%d]: bad frame in %s: frame=%p pc=%p sp=%p\n",
			task->comm, task_pid_nr(task), __func__,
-			frame, (void *)regs->epc, (void *)regs->sp);
+			frame, (void *)(ulong)regs->epc, (void *)(ulong)regs->sp);
	}
	force_sig(SIGSEGV);
	return 0;
diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c index 8ff8e8b36524..1fada4c7ddfa 100644 --- a/arch/riscv/kernel/traps.c +++ b/arch/riscv/kernel/traps.c @@ -118,7 +118,7 @@ void do_trap(struct pt_regs *regs, int signo, int code, unsigned long addr) if (show_unhandled_signals && unhandled_signal(tsk, signo) && printk_ratelimit()) { pr_info("%s[%d]: unhandled signal %d code 0x%x at 0x" REG_FMT, - tsk->comm, task_pid_nr(tsk), signo, code, addr); + tsk->comm, task_pid_nr(tsk), signo, code, (xlen_t)addr); print_vma_addr(KERN_CONT " in ", instruction_pointer(regs)); pr_cont("\n"); __show_regs(regs); @@ -281,7 +281,7 @@ void handle_break(struct pt_regs *regs) current->thread.bad_cause = regs->cause; if (user_mode(regs)) - force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)regs->epc); + force_sig_fault(SIGTRAP, TRAP_BRKPT, (void __user *)instruction_pointer(regs)); #ifdef CONFIG_KGDB else if (notify_die(DIE_TRAP, "EBREAK", regs, 0, regs->cause, SIGTRAP) == NOTIFY_STOP) diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c index 184f780c932d..884edd99e6b0 100644 --- a/arch/riscv/kernel/vector.c +++ b/arch/riscv/kernel/vector.c @@ -180,7 +180,7 @@ EXPORT_SYMBOL_GPL(riscv_v_vstate_ctrl_user_allowed); bool riscv_v_first_use_handler(struct pt_regs *regs) { - u32 __user *epc = (u32 __user *)regs->epc; + u32 __user *epc = (u32 __user *)(ulong)regs->epc; u32 insn = (u32)regs->badaddr; if (!(has_vector() || has_xtheadvector())) diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c index 0194324a0c50..fcc23350610e 100644 --- a/arch/riscv/mm/fault.c +++ b/arch/riscv/mm/fault.c @@ -78,7 +78,7 @@ static void die_kernel_fault(const char *msg, unsigned long addr, { bust_spinlocks(1); - pr_alert("Unable to handle kernel %s at virtual address " REG_FMT "\n", msg, + pr_alert("Unable to handle kernel %s at virtual address %08lx\n", msg, addr); bust_spinlocks(0); From patchwork Tue Mar 25 12:15:47 2025 Content-Type: text/plain; charset="utf-8" 
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876234
From: guoren@kernel.org
Subject: [RFC PATCH V3 06/43] rv64ilp32_abi: riscv: csum: Utilize
 64-bit width to improve the performance
Date: Tue, 25 Mar 2025 08:15:47 -0400
Message-Id: <20250325121624.523258-7-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI, derived from a 64-bit ISA, uses a 32-bit BITS_PER_LONG. The checksum algorithm can therefore use the full 64-bit register width to improve performance.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/lib/csum.c | 48 ++++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/arch/riscv/lib/csum.c b/arch/riscv/lib/csum.c
index 7fb12c59e571..7139ab855349 100644
--- a/arch/riscv/lib/csum.c
+++ b/arch/riscv/lib/csum.c
@@ -22,17 +22,17 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
 			__u32 len, __u8 proto, __wsum csum)
 {
 	unsigned int ulen, uproto;
-	unsigned long sum = (__force unsigned long)csum;
+	xlen_t sum = (__force xlen_t)csum;
 
-	sum += (__force unsigned long)saddr->s6_addr32[0];
-	sum += (__force unsigned long)saddr->s6_addr32[1];
-	sum += (__force unsigned long)saddr->s6_addr32[2];
-	sum += (__force unsigned long)saddr->s6_addr32[3];
+	sum += (__force xlen_t)saddr->s6_addr32[0];
+	sum += (__force xlen_t)saddr->s6_addr32[1];
+	sum += (__force xlen_t)saddr->s6_addr32[2];
+	sum += (__force xlen_t)saddr->s6_addr32[3];
 
-	sum += (__force unsigned long)daddr->s6_addr32[0];
-	sum += (__force unsigned long)daddr->s6_addr32[1];
-	sum += (__force unsigned long)daddr->s6_addr32[2];
-	sum += (__force unsigned long)daddr->s6_addr32[3];
+	sum += (__force xlen_t)daddr->s6_addr32[0];
+	sum += (__force xlen_t)daddr->s6_addr32[1];
+	sum += (__force xlen_t)daddr->s6_addr32[2];
+	sum += (__force xlen_t)daddr->s6_addr32[3];
 
 	ulen = (__force unsigned int)htonl((unsigned int)len);
 	sum += ulen;
@@ -46,7 +46,7 @@ __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
		 * Zbb is likely available when the kernel is compiled with Zbb
@@ -85,12 +85,12 @@ EXPORT_SYMBOL(csum_ipv6_magic);
 #define OFFSET_MASK 7
 #endif
 
-static inline __no_sanitize_address unsigned long
-do_csum_common(const unsigned long *ptr, const unsigned long *end,
-	       unsigned long data)
+static inline __no_sanitize_address xlen_t
+do_csum_common(const xlen_t *ptr, const xlen_t *end,
+	       xlen_t data)
 {
 	unsigned int shift;
-	unsigned long csum = 0, carry = 0;
+	xlen_t csum = 0, carry = 0;
 
 	/*
	 * Do 32-bit reads on RV32 and 64-bit reads otherwise. This should be
@@ -130,8 +130,8 @@ static inline __no_sanitize_address unsigned int
 do_csum_with_alignment(const unsigned char *buff, int len)
 {
 	unsigned int offset, shift;
-	unsigned long csum, data;
-	const unsigned long *ptr, *end;
+	xlen_t csum, data;
+	const xlen_t *ptr, *end;
 
 	/*
	 * Align address to closest word (double word on rv64) that comes before
@@ -140,7 +140,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 	 */
 	offset = (unsigned long)buff & OFFSET_MASK;
 	kasan_check_read(buff, len);
-	ptr = (const unsigned long *)(buff - offset);
+	ptr = (const xlen_t *)(buff - offset);
 
 	/*
	 * Clear the most significant bytes that were over-read if buff was not
@@ -153,7 +153,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 #else
 	data = (data << shift) >> shift;
 #endif
-	end = (const unsigned long *)(buff + len);
+	end = (const xlen_t *)(buff + len);
 	csum = do_csum_common(ptr, end, data);
 
 #ifdef CC_HAS_ASM_GOTO_TIED_OUTPUT
@@ -163,7 +163,7 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
		 * Zbb is likely available when the kernel is compiled with Zbb
@@ -233,15 +233,15 @@ do_csum_with_alignment(const unsigned char *buff, int len)
 static inline __no_sanitize_address unsigned int
 do_csum_no_alignment(const unsigned char *buff, int len)
 {
-	unsigned long csum, data;
-	const unsigned long *ptr, *end;
+	xlen_t csum, data;
+	const xlen_t *ptr, *end;
 
-	ptr = (const unsigned long *)(buff);
+	ptr = (const xlen_t *)(buff);
 	data = *(ptr++);
 	kasan_check_read(buff, len);
 
-	end = (const unsigned long *)(buff + len);
+	end = (const xlen_t *)(buff + len);
 	csum = do_csum_common(ptr, end, data);
 
 	/*
@@ -250,7 +250,7 @@ do_csum_no_alignment(const unsigned char *buff, int len)
 	 */
 	if (IS_ENABLED(CONFIG_RISCV_ISA_ZBB) &&
	    IS_ENABLED(CONFIG_RISCV_ALTERNATIVE)) {
-		unsigned long fold_temp;
+		xlen_t fold_temp;
 
 		/*
		 * Zbb is likely available when the kernel is compiled with Zbb

From patchwork Tue Mar 25 12:15:49 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876233
From: guoren@kernel.org
Subject: [RFC PATCH V3 08/43] rv64ilp32_abi: riscv: bitops: Adapt ctzw & clzw of zbb extension
Date: Tue, 25 Mar 2025 08:15:49 -0400
Message-Id: <20250325121624.523258-9-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on the 64-bit ISA, but BITS_PER_LONG is 32. Use ctzw and clzw for the int and long types instead of ctz and clz.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/bitops.h | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/include/asm/bitops.h b/arch/riscv/include/asm/bitops.h
index c6bd3d8354a9..d041b9e3ba84 100644
--- a/arch/riscv/include/asm/bitops.h
+++ b/arch/riscv/include/asm/bitops.h
@@ -35,14 +35,27 @@
 #include
 #include
 
-#if (BITS_PER_LONG == 64)
+#if (__riscv_xlen == 64)
 #define CTZW	"ctzw "
 #define CLZW	"clzw "
+
+#if (BITS_PER_LONG == 64)
+#define CTZ	"ctz "
+#define CLZ	"clz "
 #elif (BITS_PER_LONG == 32)
+#define CTZ	"ctzw "
+#define CLZ	"clzw "
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#elif (__riscv_xlen == 32)
 #define CTZW	"ctz "
 #define CLZW	"clz "
+#define CTZ	"ctz "
+#define CLZ	"clz "
 #else
-#error "Unexpected BITS_PER_LONG"
+#error "Unexpected __riscv_xlen"
 #endif
 
 static __always_inline unsigned long variable__ffs(unsigned long word)
@@ -53,7 +66,7 @@ static __always_inline unsigned long variable__ffs(unsigned long word)
 
 	asm volatile (".option push\n"
		      ".option arch,+zbb\n"
-		      "ctz %0, %1\n"
+		      CTZ "%0, %1\n"
		      ".option pop\n"
		      : "=r" (word) : "r" (word) :);
@@ -82,7 +95,7 @@ static __always_inline unsigned long variable__fls(unsigned long word)
 
 	asm volatile (".option push\n"
		      ".option arch,+zbb\n"
-		      "clz %0, %1\n"
+		      CLZ "%0, %1\n"
		      ".option pop\n"
		      : "=r" (word) : "r" (word) :);

From patchwork Tue Mar 25 12:15:51 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876232
From: guoren@kernel.org
Subject: [RFC PATCH V3 10/43] rv64ilp32_abi: riscv: Update SATP.MODE.ASID width
Date: Tue, 25 Mar 2025 08:15:51 -0400
Message-Id: <20250325121624.523258-11-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

RV32 is limited to 9 ASID bits because its satp CSR is only 32 bits wide (xlen=32), whereas the RV64ILP32 ABI, rooted in the RV64 ISA, has a 64-bit satp CSR. Hence, the rv64ilp32 ABI adopts the same ASID mechanism as the 64-bit architecture.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/mm/context.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/riscv/mm/context.c b/arch/riscv/mm/context.c
index 4abe3de23225..c3f9926d9337 100644
--- a/arch/riscv/mm/context.c
+++ b/arch/riscv/mm/context.c
@@ -226,14 +226,18 @@ static inline void set_mm(struct mm_struct *prev,
 static int __init asids_init(void)
 {
-	unsigned long asid_bits, old;
+	xlen_t asid_bits, old;
 
 	/* Figure-out number of ASID bits in HW */
 	old = csr_read(CSR_SATP);
 	asid_bits = old | (SATP_ASID_MASK << SATP_ASID_SHIFT);
 	csr_write(CSR_SATP, asid_bits);
 	asid_bits = (csr_read(CSR_SATP) >> SATP_ASID_SHIFT) & SATP_ASID_MASK;
-	asid_bits = fls_long(asid_bits);
+#if __riscv_xlen == 64
+	asid_bits = fls64(asid_bits);
+#else
+	asid_bits = fls(asid_bits);
+#endif
 	csr_write(CSR_SATP, old);
 
 	/*
@@ -265,9 +269,9 @@ static int __init asids_init(void)
 		static_branch_enable(&use_asid_allocator);
 
 		pr_info("ASID allocator using %lu bits (%lu entries)\n",
-			asid_bits, num_asids);
+			(ulong)asid_bits, num_asids);
 	} else {
-		pr_info("ASID allocator disabled (%lu bits)\n", asid_bits);
+		pr_info("ASID allocator disabled (%lu bits)\n", (ulong)asid_bits);
 	}
 
 	return 0;

From patchwork Tue Mar 25 12:15:53 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876231
From: guoren@kernel.org
Subject: [RFC PATCH V3 12/43] rv64ilp32_abi: riscv: Introduce cmpxchg_double
Date: Tue, 25 Mar 2025 08:15:53 -0400
Message-Id: <20250325121624.523258-13-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI has the ability to exclusively load and store (lr.d/sc.d) a pair of 32-bit words from one address.
SLUB can then take advantage of a cmpxchg_double implementation to avoid taking some locks. This patch provides an implementation of cmpxchg_double for 32-bit pairs and activates the logic required for SLUB to use it (HAVE_ALIGNED_STRUCT_PAGE and HAVE_CMPXCHG_DOUBLE).

Inspired by commit 5284e1b4bc8a ("arm64: xchg: Implement cmpxchg_double").

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/Kconfig               |  1 +
 arch/riscv/include/asm/cmpxchg.h | 53 ++++++++++++++++++++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index da2111b0111c..884235cf4092 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -141,6 +141,7 @@ config RISCV
 	select HAVE_ARCH_USERFAULTFD_MINOR if 64BIT && USERFAULTFD
 	select HAVE_ARCH_VMAP_STACK if MMU && 64BIT
 	select HAVE_ASM_MODVERSIONS
+	select HAVE_CMPXCHG_DOUBLE if ABI_RV64ILP32
 	select HAVE_CONTEXT_TRACKING_USER
 	select HAVE_DEBUG_KMEMLEAK
 	select HAVE_DMA_CONTIGUOUS if MMU
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 938d50194dba..944f6d825f78 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -7,6 +7,7 @@
 #define _ASM_RISCV_CMPXCHG_H
 
 #include
+#include
 #include
 #include
@@ -409,6 +410,58 @@ static __always_inline void __cmpwait(volatile void *ptr,
 #define __cmpwait_relaxed(ptr, val) \
	__cmpwait((ptr), (unsigned long)(val), sizeof(*(ptr)))
 
+#ifdef CONFIG_HAVE_CMPXCHG_DOUBLE
+#define system_has_cmpxchg_double()	1
+
+#define __cmpxchg_double_check(ptr1, ptr2)				\
+({									\
+	if (sizeof(*(ptr1)) != 4)					\
+		BUILD_BUG();						\
+	if (sizeof(*(ptr2)) != 4)					\
+		BUILD_BUG();						\
+	VM_BUG_ON((ulong *)(ptr2) - (ulong *)(ptr1) != 1);		\
+	VM_BUG_ON(((ulong)ptr1 & 0x7) != 0);				\
+})
+
+#define __cmpxchg_double(old1, old2, new1, new2, ptr)			\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	register unsigned int __ret;					\
+	u64 __old;							\
+	u64 __new;							\
+	u64 __tmp;							\
+	switch (sizeof(*(ptr))) {					\
+	case 4:								\
+		__old = ((u64)old2 << 32) | (u64)old1;			\
+		__new = ((u64)new2 << 32) | (u64)new1;			\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__tmp), "=&r" (__ret), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		__ret = (__old == __tmp);				\
+		break;							\
+	default:							\
+		BUILD_BUG();						\
+	}								\
+	__ret;								\
+})
+
+#define arch_cmpxchg_double(ptr1, ptr2, o1, o2, n1, n2)			\
+({									\
+	int __ret;							\
+	__cmpxchg_double_check(ptr1, ptr2);				\
+	__ret = __cmpxchg_double((ulong)(o1), (ulong)(o2),		\
+				 (ulong)(n1), (ulong)(n2),		\
+				 ptr1);					\
+	__ret;								\
+})
+#endif
 #endif
 
 #endif /* _ASM_RISCV_CMPXCHG_H */

From patchwork Tue Mar 25 12:15:55 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876230
From: guoren@kernel.org
Subject: [RFC PATCH V3 14/43] rv64ilp32_abi: riscv: Adapt kernel module code
Date: Tue, 25 Mar 2025 08:15:55 -0400
Message-Id: <20250325121624.523258-15-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

Because riscv_insn_valid_32bit_offset() is always true for ILP32, test BITS_PER_LONG instead of CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/kernel/module.c   | 2 +-
 include/asm-generic/module.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
index 47d0ebeec93c..d7360878e618 100644
--- a/arch/riscv/kernel/module.c
+++ b/arch/riscv/kernel/module.c
@@ -45,7 +45,7 @@ struct relocation_handlers {
  */
 static bool riscv_insn_valid_32bit_offset(ptrdiff_t val)
 {
-#ifdef CONFIG_32BIT
+#if BITS_PER_LONG == 32
 	return true;
 #else
 	return (-(1L << 31) - (1L << 11)) <= val && val < ((1L << 31) - (1L << 11));
diff --git a/include/asm-generic/module.h b/include/asm-generic/module.h
index 98e1541b72b7..f870171b14a8 100644
--- a/include/asm-generic/module.h
+++ b/include/asm-generic/module.h
@@ -12,7 +12,7 @@ struct mod_arch_specific
 };
 #endif
 
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 #define Elf_Shdr	Elf64_Shdr
 #define Elf_Phdr	Elf64_Phdr
 #define Elf_Sym		Elf64_Sym

From patchwork Tue Mar 25 12:15:58 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876229
Subject: [RFC PATCH V3 17/43] rv64ilp32_abi: riscv: Adapt kasan memory layout
Date: Tue, 25 Mar 2025 08:15:58 -0400
Message-Id: <20250325121624.523258-18-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

For generic KASAN, the size of each memory granule is 8 bytes, so the
shadow needs 1/8 of the address space. The kernel space is 2 GiB in
rv64ilp32, so we need a 256 MiB shadow range (0x80000000 ~ 0x90000000),
and the shadow offset is 0x70000000 for the whole 4 GiB address space.
Virtual kernel memory layout:
  fixmap  : 0x90a00000 - 0x90ffffff  (6144 kB)
  pci io  : 0x91000000 - 0x91ffffff  (  16 MB)
  vmemmap : 0x92000000 - 0x93ffffff  (  32 MB)
  vmalloc : 0x94000000 - 0xb3ffffff  ( 512 MB)
  modules : 0xb4000000 - 0xb7ffffff  (  64 MB)
  lowmem  : 0xc0000000 - 0xc7ffffff  ( 128 MB)
  kasan   : 0x80000000 - 0x8fffffff  ( 256 MB) <=
  kernel  : 0xb8000000 - 0xbfffffff  ( 128 MB)

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/asm/kasan.h | 6 +++++-
 arch/riscv/mm/kasan_init.c     | 2 +-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/include/asm/kasan.h b/arch/riscv/include/asm/kasan.h
index e6a0071bdb56..dd3a211bc5d0 100644
--- a/arch/riscv/include/asm/kasan.h
+++ b/arch/riscv/include/asm/kasan.h
@@ -21,7 +21,7 @@
  * [KASAN_SHADOW_OFFSET, KASAN_SHADOW_END) cover all 64-bits of virtual
  * addresses. So KASAN_SHADOW_OFFSET should satisfy the following equation:
  * KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
- *                       (1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
+ *                       (1ULL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define KASAN_SHADOW_SCALE_SHIFT	3
@@ -31,7 +31,11 @@
  * aligned on PGDIR_SIZE, so force its alignment to ease its population.
  */
 #define KASAN_SHADOW_START	((KASAN_SHADOW_END - KASAN_SHADOW_SIZE) & PGDIR_MASK)
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+#define KASAN_SHADOW_END	0x90000000UL
+#else
 #define KASAN_SHADOW_END	MODULES_LOWEST_VADDR
+#endif
 
 #ifdef CONFIG_KASAN
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 41c635d6aca4..1e864598779a 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -324,7 +324,7 @@ asmlinkage void __init kasan_early_init(void)
 	uintptr_t i;
 
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		     KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
+		     KASAN_SHADOW_END - (1UL << (BITS_PER_LONG - KASAN_SHADOW_SCALE_SHIFT)));
 
 	for (i = 0; i < PTRS_PER_PTE; ++i)
 		set_pte(kasan_early_shadow_pte + i,

From patchwork Tue Mar 25 12:16:00 2025
Subject: [RFC PATCH V3 19/43] rv64ilp32_abi: irqchip: irq-riscv-intc: Use xlen_t instead of ulong
Date: Tue, 25 Mar 2025 08:16:00 -0400
Message-Id: <20250325121624.523258-20-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI is based on CONFIG_64BIT, so use xlen/xlen_t instead
of BITS_PER_LONG/ulong.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/irqchip/irq-riscv-intc.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
index f653c13de62b..4fc7d5704acf 100644
--- a/drivers/irqchip/irq-riscv-intc.c
+++ b/drivers/irqchip/irq-riscv-intc.c
@@ -20,18 +20,19 @@
 #include
 #include
+#include
 
 static struct irq_domain *intc_domain;
 
-static unsigned int riscv_intc_nr_irqs __ro_after_init = BITS_PER_LONG;
-static unsigned int riscv_intc_custom_base __ro_after_init = BITS_PER_LONG;
+static unsigned int riscv_intc_nr_irqs __ro_after_init = __riscv_xlen;
+static unsigned int riscv_intc_custom_base __ro_after_init = __riscv_xlen;
 static unsigned int riscv_intc_custom_nr_irqs __ro_after_init;
 
 static void riscv_intc_irq(struct pt_regs *regs)
 {
-	unsigned long cause = regs->cause & ~CAUSE_IRQ_FLAG;
+	xlen_t cause = regs->cause & ~CAUSE_IRQ_FLAG;
 
 	if (generic_handle_domain_irq(intc_domain, cause))
-		pr_warn_ratelimited("Failed to handle interrupt (cause: %ld)\n", cause);
+		pr_warn_ratelimited("Failed to handle interrupt (cause: " REG_FMT ")\n", cause);
 }
 
 static void riscv_intc_aia_irq(struct pt_regs *regs)

From patchwork Tue Mar 25 12:16:02 2025
Subject: [RFC PATCH V3 21/43] rv64ilp32_abi: asm-generic: Add custom BITS_PER_LONG definition
Date: Tue, 25 Mar 2025 08:16:02 -0400
Message-Id: <20250325121624.523258-22-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI Linux kernel is based on CONFIG_64BIT, but
BITS_PER_LONG is 32.
So, give a custom architectural definition of BITS_PER_LONG to match
the correct macro definition.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 arch/riscv/include/uapi/asm/bitsperlong.h | 6 ++++++
 include/asm-generic/bitsperlong.h         | 2 ++
 2 files changed, 8 insertions(+)

diff --git a/arch/riscv/include/uapi/asm/bitsperlong.h b/arch/riscv/include/uapi/asm/bitsperlong.h
index 7d0b32e3b701..fec2ad91597c 100644
--- a/arch/riscv/include/uapi/asm/bitsperlong.h
+++ b/arch/riscv/include/uapi/asm/bitsperlong.h
@@ -9,6 +9,12 @@
 
 #define __BITS_PER_LONG (__SIZEOF_POINTER__ * 8)
 
+#if __BITS_PER_LONG == 64
+#define BITS_PER_LONG 64
+#else
+#define BITS_PER_LONG 32
+#endif
+
 #include
 
 #endif /* _UAPI_ASM_RISCV_BITSPERLONG_H */
diff --git a/include/asm-generic/bitsperlong.h b/include/asm-generic/bitsperlong.h
index 1023e2a4bd37..7ccbb7ce6610 100644
--- a/include/asm-generic/bitsperlong.h
+++ b/include/asm-generic/bitsperlong.h
@@ -6,7 +6,9 @@
 
 #ifdef CONFIG_64BIT
+#ifndef BITS_PER_LONG
 #define BITS_PER_LONG 64
+#endif
 #else
 #define BITS_PER_LONG 32
 #endif /* CONFIG_64BIT */

From patchwork Tue Mar 25 12:16:04 2025
Subject: [RFC PATCH V3 23/43] rv64ilp32_abi: compat: Correct compat_ulong_t cast
Date: Tue, 25 Mar 2025 08:16:04 -0400
Message-Id: <20250325121624.523258-24-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

RV64ILP32 ABI systems have BITS_PER_LONG set to 32, matching
sizeof(compat_ulong_t). Adjust the code involving compat_ulong_t
accordingly.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/uapi/linux/auto_fs.h | 6 ++++++
 kernel/compat.c              | 15 ++++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/include/uapi/linux/auto_fs.h b/include/uapi/linux/auto_fs.h
index 8081df849743..7d925ee810b6 100644
--- a/include/uapi/linux/auto_fs.h
+++ b/include/uapi/linux/auto_fs.h
@@ -80,9 +80,15 @@ enum {
 #define AUTOFS_IOC_SETTIMEOUT32 _IOWR(AUTOFS_IOCTL, \
					AUTOFS_IOC_SETTIMEOUT_CMD, \
					compat_ulong_t)
+#if __riscv_xlen == 64
+#define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
+					AUTOFS_IOC_SETTIMEOUT_CMD, \
+					unsigned long long)
+#else
 #define AUTOFS_IOC_SETTIMEOUT _IOWR(AUTOFS_IOCTL, \
					AUTOFS_IOC_SETTIMEOUT_CMD, \
					unsigned long)
+#endif
 #define AUTOFS_IOC_EXPIRE _IOR(AUTOFS_IOCTL, \
					AUTOFS_IOC_EXPIRE_CMD, \
					struct autofs_packet_expire)
diff --git a/kernel/compat.c b/kernel/compat.c
index fb50f29d9b36..46ffdc5e7cc4 100644
--- a/kernel/compat.c
+++ b/kernel/compat.c
@@ -203,11 +203,17 @@ long compat_get_bitmap(unsigned long *mask, const compat_ulong_t __user *umask,
 		return -EFAULT;
 
 	while (nr_compat_longs > 1) {
-		compat_ulong_t l1, l2;
+		compat_ulong_t l1;
 		unsafe_get_user(l1, umask++, Efault);
+		nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
+		compat_ulong_t l2;
 		unsafe_get_user(l2, umask++, Efault);
 		*mask++ = ((unsigned long)l2 << BITS_PER_COMPAT_LONG) | l1;
-		nr_compat_longs -= 2;
+		nr_compat_longs -= 1;
+#else
+		*mask++ = l1;
+#endif
 	}
 	if (nr_compat_longs)
 		unsafe_get_user(*mask, umask++, Efault);
@@ -234,8 +240,11 @@ long compat_put_bitmap(compat_ulong_t __user *umask, unsigned long *mask,
 	while (nr_compat_longs > 1) {
 		unsigned long m = *mask++;
 		unsafe_put_user((compat_ulong_t)m, umask++, Efault);
+		nr_compat_longs -= 1;
+#if BITS_PER_LONG == 64
 		unsafe_put_user(m >> BITS_PER_COMPAT_LONG, umask++, Efault);
-		nr_compat_longs -= 2;
+		nr_compat_longs -= 1;
+#endif
 	}
 	if (nr_compat_longs)
 		unsafe_put_user((compat_ulong_t)*mask, umask++, Efault);

From patchwork Tue Mar 25 12:16:06 2025
From: guoren@kernel.org
Subject: [RFC PATCH V3 25/43] rv64ilp32_abi: exec: Adapt 64lp64 env and argv
Date: Tue, 25 Mar 2025 08:16:06 -0400
Message-Id: <20250325121624.523258-26-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The rv64ilp32 ABI reuses the env and argv memory layout of the lp64
ABI, so leave the space to fit the lp64 struct layout.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/exec.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/exec.c b/fs/exec.c
index 506cd411f4ac..548d18b7ae92 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -424,6 +424,10 @@ static const char __user *get_user_arg_ptr(struct user_arg_ptr argv, int nr)
 	}
 #endif
 
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	nr = nr * 2;
+#endif
+
 	if (get_user(native, argv.ptr.native + nr))
 		return ERR_PTR(-EFAULT);

From patchwork Tue Mar 25 12:16:08 2025
Subject: [RFC PATCH V3 27/43] rv64ilp32_abi: input: Adapt BITS_PER_LONG to dword
Date: Tue, 25 Mar 2025 08:16:08 -0400
Message-Id: <20250325121624.523258-28-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI Linux kernel is based on CONFIG_64BIT, but
BITS_PER_LONG is 32. So, adapt the bits-to-dword conversion to
BITS_PER_LONG.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/input/input.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/input/input.c b/drivers/input/input.c
index c9e3ac64bcd0..7af5e8c66f25 100644
--- a/drivers/input/input.c
+++ b/drivers/input/input.c
@@ -1006,7 +1006,11 @@ static int input_bits_to_string(char *buf, int buf_size,
 	int len = 0;
 
 	if (in_compat_syscall()) {
+#if BITS_PER_LONG == 64
 		u32 dword = bits >> 32;
+#else
+		u32 dword = bits;
+#endif
 		if (dword || !skip_empty)
 			len += snprintf(buf, buf_size, "%x ", dword);

From patchwork Tue Mar 25 12:16:10 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876223
From: guoren@kernel.org
Subject: [RFC PATCH V3 29/43] rv64ilp32_abi: locking/atomic: Use BITS_PER_LONG for scripts
Date: Tue, 25 Mar 2025 08:16:10 -0400
Message-Id: <20250325121624.523258-30-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

In RV64ILP32 ABI kernels, BITS_PER_LONG is 32 even though CONFIG_64BIT
is set, so the atomic_long code selection must key off BITS_PER_LONG,
not CONFIG_64BIT.
Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/atomic/atomic-long.h | 174 ++++++++++++++---------------
 scripts/atomic/gen-atomic-long.sh  |   4 +-
 2 files changed, 89 insertions(+), 89 deletions(-)

diff --git a/include/linux/atomic/atomic-long.h b/include/linux/atomic/atomic-long.h
index f86b29d90877..e31e0bdf9e26 100644
--- a/include/linux/atomic/atomic-long.h
+++ b/include/linux/atomic/atomic-long.h
@@ -9,7 +9,7 @@ #include <linux/compiler.h> #include <asm/types.h> -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 typedef atomic64_t atomic_long_t; #define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i) #define atomic_long_cond_read_acquire atomic64_cond_read_acquire @@ -34,7 +34,7 @@ typedef atomic_t atomic_long_t; static __always_inline long raw_atomic_long_read(const atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_read(v); #else return raw_atomic_read(v); @@ -54,7 +54,7 @@ raw_atomic_long_read(const atomic_long_t *v) static __always_inline long raw_atomic_long_read_acquire(const atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_read_acquire(v); #else return raw_atomic_read_acquire(v); @@ -75,7 +75,7 @@ raw_atomic_long_read_acquire(const atomic_long_t *v) static __always_inline void raw_atomic_long_set(atomic_long_t *v, long i) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_set(v, i); #else raw_atomic_set(v, i); @@ -96,7 +96,7 @@ raw_atomic_long_set(atomic_long_t *v, long i) static __always_inline void raw_atomic_long_set_release(atomic_long_t *v, long i) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_set_release(v, i); #else raw_atomic_set_release(v, i); @@ -117,7 +117,7 @@ raw_atomic_long_set_release(atomic_long_t *v, long i) static __always_inline void raw_atomic_long_add(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_add(i, v); #else raw_atomic_add(i, v); @@ -138,7 +138,7 @@ raw_atomic_long_add(long i, atomic_long_t *v) static __always_inline long
raw_atomic_long_add_return(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return(i, v); #else return raw_atomic_add_return(i, v); @@ -159,7 +159,7 @@ raw_atomic_long_add_return(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_acquire(i, v); #else return raw_atomic_add_return_acquire(i, v); @@ -180,7 +180,7 @@ raw_atomic_long_add_return_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_release(i, v); #else return raw_atomic_add_return_release(i, v); @@ -201,7 +201,7 @@ raw_atomic_long_add_return_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_return_relaxed(i, v); #else return raw_atomic_add_return_relaxed(i, v); @@ -222,7 +222,7 @@ raw_atomic_long_add_return_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add(i, v); #else return raw_atomic_fetch_add(i, v); @@ -243,7 +243,7 @@ raw_atomic_long_fetch_add(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_acquire(i, v); #else return raw_atomic_fetch_add_acquire(i, v); @@ -264,7 +264,7 @@ raw_atomic_long_fetch_add_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_release(i, v); #else return 
raw_atomic_fetch_add_release(i, v); @@ -285,7 +285,7 @@ raw_atomic_long_fetch_add_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_relaxed(i, v); #else return raw_atomic_fetch_add_relaxed(i, v); @@ -306,7 +306,7 @@ raw_atomic_long_fetch_add_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_sub(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_sub(i, v); #else raw_atomic_sub(i, v); @@ -327,7 +327,7 @@ raw_atomic_long_sub(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return(i, v); #else return raw_atomic_sub_return(i, v); @@ -348,7 +348,7 @@ raw_atomic_long_sub_return(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_acquire(i, v); #else return raw_atomic_sub_return_acquire(i, v); @@ -369,7 +369,7 @@ raw_atomic_long_sub_return_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_release(i, v); #else return raw_atomic_sub_return_release(i, v); @@ -390,7 +390,7 @@ raw_atomic_long_sub_return_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_return_relaxed(i, v); #else return raw_atomic_sub_return_relaxed(i, v); @@ -411,7 +411,7 @@ raw_atomic_long_sub_return_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if 
BITS_PER_LONG == 64 return raw_atomic64_fetch_sub(i, v); #else return raw_atomic_fetch_sub(i, v); @@ -432,7 +432,7 @@ raw_atomic_long_fetch_sub(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_acquire(i, v); #else return raw_atomic_fetch_sub_acquire(i, v); @@ -453,7 +453,7 @@ raw_atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_release(i, v); #else return raw_atomic_fetch_sub_release(i, v); @@ -474,7 +474,7 @@ raw_atomic_long_fetch_sub_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_sub_relaxed(i, v); #else return raw_atomic_fetch_sub_relaxed(i, v); @@ -494,7 +494,7 @@ raw_atomic_long_fetch_sub_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_inc(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_inc(v); #else raw_atomic_inc(v); @@ -514,7 +514,7 @@ raw_atomic_long_inc(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return(v); #else return raw_atomic_inc_return(v); @@ -534,7 +534,7 @@ raw_atomic_long_inc_return(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return_acquire(v); #else return raw_atomic_inc_return_acquire(v); @@ -554,7 +554,7 @@ raw_atomic_long_inc_return_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 
return raw_atomic64_inc_return_release(v); #else return raw_atomic_inc_return_release(v); @@ -574,7 +574,7 @@ raw_atomic_long_inc_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_inc_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_return_relaxed(v); #else return raw_atomic_inc_return_relaxed(v); @@ -594,7 +594,7 @@ raw_atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc(v); #else return raw_atomic_fetch_inc(v); @@ -614,7 +614,7 @@ raw_atomic_long_fetch_inc(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_acquire(v); #else return raw_atomic_fetch_inc_acquire(v); @@ -634,7 +634,7 @@ raw_atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_release(v); #else return raw_atomic_fetch_inc_release(v); @@ -654,7 +654,7 @@ raw_atomic_long_fetch_inc_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_inc_relaxed(v); #else return raw_atomic_fetch_inc_relaxed(v); @@ -674,7 +674,7 @@ raw_atomic_long_fetch_inc_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_dec(v); #else raw_atomic_dec(v); @@ -694,7 +694,7 @@ raw_atomic_long_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return(v); #else return raw_atomic_dec_return(v); @@ -714,7 
+714,7 @@ raw_atomic_long_dec_return(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_acquire(v); #else return raw_atomic_dec_return_acquire(v); @@ -734,7 +734,7 @@ raw_atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_release(v); #else return raw_atomic_dec_return_release(v); @@ -754,7 +754,7 @@ raw_atomic_long_dec_return_release(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_return_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_return_relaxed(v); #else return raw_atomic_dec_return_relaxed(v); @@ -774,7 +774,7 @@ raw_atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec(v); #else return raw_atomic_fetch_dec(v); @@ -794,7 +794,7 @@ raw_atomic_long_fetch_dec(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_acquire(v); #else return raw_atomic_fetch_dec_acquire(v); @@ -814,7 +814,7 @@ raw_atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_release(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_release(v); #else return raw_atomic_fetch_dec_release(v); @@ -834,7 +834,7 @@ raw_atomic_long_fetch_dec_release(atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_dec_relaxed(v); #else return raw_atomic_fetch_dec_relaxed(v); @@ -855,7 +855,7 @@ 
raw_atomic_long_fetch_dec_relaxed(atomic_long_t *v) static __always_inline void raw_atomic_long_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_and(i, v); #else raw_atomic_and(i, v); @@ -876,7 +876,7 @@ raw_atomic_long_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and(i, v); #else return raw_atomic_fetch_and(i, v); @@ -897,7 +897,7 @@ raw_atomic_long_fetch_and(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_acquire(i, v); #else return raw_atomic_fetch_and_acquire(i, v); @@ -918,7 +918,7 @@ raw_atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_release(i, v); #else return raw_atomic_fetch_and_release(i, v); @@ -939,7 +939,7 @@ raw_atomic_long_fetch_and_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_and_relaxed(i, v); #else return raw_atomic_fetch_and_relaxed(i, v); @@ -960,7 +960,7 @@ raw_atomic_long_fetch_and_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_andnot(i, v); #else raw_atomic_andnot(i, v); @@ -981,7 +981,7 @@ raw_atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot(i, v); #else return raw_atomic_fetch_andnot(i, v); @@ -1002,7 +1002,7 @@ 
raw_atomic_long_fetch_andnot(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_acquire(i, v); #else return raw_atomic_fetch_andnot_acquire(i, v); @@ -1023,7 +1023,7 @@ raw_atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_release(i, v); #else return raw_atomic_fetch_andnot_release(i, v); @@ -1044,7 +1044,7 @@ raw_atomic_long_fetch_andnot_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_andnot_relaxed(i, v); #else return raw_atomic_fetch_andnot_relaxed(i, v); @@ -1065,7 +1065,7 @@ raw_atomic_long_fetch_andnot_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_or(i, v); #else raw_atomic_or(i, v); @@ -1086,7 +1086,7 @@ raw_atomic_long_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or(i, v); #else return raw_atomic_fetch_or(i, v); @@ -1107,7 +1107,7 @@ raw_atomic_long_fetch_or(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_acquire(i, v); #else return raw_atomic_fetch_or_acquire(i, v); @@ -1128,7 +1128,7 @@ raw_atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return 
raw_atomic64_fetch_or_release(i, v); #else return raw_atomic_fetch_or_release(i, v); @@ -1149,7 +1149,7 @@ raw_atomic_long_fetch_or_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_or_relaxed(i, v); #else return raw_atomic_fetch_or_relaxed(i, v); @@ -1170,7 +1170,7 @@ raw_atomic_long_fetch_or_relaxed(long i, atomic_long_t *v) static __always_inline void raw_atomic_long_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 raw_atomic64_xor(i, v); #else raw_atomic_xor(i, v); @@ -1191,7 +1191,7 @@ raw_atomic_long_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor(i, v); #else return raw_atomic_fetch_xor(i, v); @@ -1212,7 +1212,7 @@ raw_atomic_long_fetch_xor(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_acquire(i, v); #else return raw_atomic_fetch_xor_acquire(i, v); @@ -1233,7 +1233,7 @@ raw_atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_release(i, v); #else return raw_atomic_fetch_xor_release(i, v); @@ -1254,7 +1254,7 @@ raw_atomic_long_fetch_xor_release(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_xor_relaxed(i, v); #else return raw_atomic_fetch_xor_relaxed(i, v); @@ -1275,7 +1275,7 @@ raw_atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long raw_atomic_long_xchg(atomic_long_t *v, 
long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg(v, new); #else return raw_atomic_xchg(v, new); @@ -1296,7 +1296,7 @@ raw_atomic_long_xchg(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_acquire(v, new); #else return raw_atomic_xchg_acquire(v, new); @@ -1317,7 +1317,7 @@ raw_atomic_long_xchg_acquire(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_release(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_release(v, new); #else return raw_atomic_xchg_release(v, new); @@ -1338,7 +1338,7 @@ raw_atomic_long_xchg_release(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_xchg_relaxed(v, new); #else return raw_atomic_xchg_relaxed(v, new); @@ -1361,7 +1361,7 @@ raw_atomic_long_xchg_relaxed(atomic_long_t *v, long new) static __always_inline long raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg(v, old, new); #else return raw_atomic_cmpxchg(v, old, new); @@ -1384,7 +1384,7 @@ raw_atomic_long_cmpxchg(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_acquire(v, old, new); #else return raw_atomic_cmpxchg_acquire(v, old, new); @@ -1407,7 +1407,7 @@ raw_atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_release(v, old, new); #else return raw_atomic_cmpxchg_release(v, old, 
new); @@ -1430,7 +1430,7 @@ raw_atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) static __always_inline long raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_cmpxchg_relaxed(v, old, new); #else return raw_atomic_cmpxchg_relaxed(v, old, new); @@ -1454,7 +1454,7 @@ raw_atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg(v, (int *)old, new); @@ -1478,7 +1478,7 @@ raw_atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_acquire(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_acquire(v, (int *)old, new); @@ -1502,7 +1502,7 @@ raw_atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_release(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_release(v, (int *)old, new); @@ -1526,7 +1526,7 @@ raw_atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_try_cmpxchg_relaxed(v, (s64 *)old, new); #else return raw_atomic_try_cmpxchg_relaxed(v, (int *)old, new); @@ -1547,7 +1547,7 @@ raw_atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool raw_atomic_long_sub_and_test(long i, atomic_long_t *v) { -#ifdef 
CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_sub_and_test(i, v); #else return raw_atomic_sub_and_test(i, v); @@ -1567,7 +1567,7 @@ raw_atomic_long_sub_and_test(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_dec_and_test(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_and_test(v); #else return raw_atomic_dec_and_test(v); @@ -1587,7 +1587,7 @@ raw_atomic_long_dec_and_test(atomic_long_t *v) static __always_inline bool raw_atomic_long_inc_and_test(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_and_test(v); #else return raw_atomic_inc_and_test(v); @@ -1608,7 +1608,7 @@ raw_atomic_long_inc_and_test(atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative(i, v); #else return raw_atomic_add_negative(i, v); @@ -1629,7 +1629,7 @@ raw_atomic_long_add_negative(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_acquire(i, v); #else return raw_atomic_add_negative_acquire(i, v); @@ -1650,7 +1650,7 @@ raw_atomic_long_add_negative_acquire(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_release(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_release(i, v); #else return raw_atomic_add_negative_release(i, v); @@ -1671,7 +1671,7 @@ raw_atomic_long_add_negative_release(long i, atomic_long_t *v) static __always_inline bool raw_atomic_long_add_negative_relaxed(long i, atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_negative_relaxed(i, v); #else return raw_atomic_add_negative_relaxed(i, v); @@ -1694,7 +1694,7 @@ raw_atomic_long_add_negative_relaxed(long i, atomic_long_t 
*v) static __always_inline long raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_fetch_add_unless(v, a, u); #else return raw_atomic_fetch_add_unless(v, a, u); @@ -1717,7 +1717,7 @@ raw_atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_add_unless(v, a, u); #else return raw_atomic_add_unless(v, a, u); @@ -1738,7 +1738,7 @@ raw_atomic_long_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool raw_atomic_long_inc_not_zero(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_not_zero(v); #else return raw_atomic_inc_not_zero(v); @@ -1759,7 +1759,7 @@ raw_atomic_long_inc_not_zero(atomic_long_t *v) static __always_inline bool raw_atomic_long_inc_unless_negative(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_inc_unless_negative(v); #else return raw_atomic_inc_unless_negative(v); @@ -1780,7 +1780,7 @@ raw_atomic_long_inc_unless_negative(atomic_long_t *v) static __always_inline bool raw_atomic_long_dec_unless_positive(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_unless_positive(v); #else return raw_atomic_dec_unless_positive(v); @@ -1801,7 +1801,7 @@ raw_atomic_long_dec_unless_positive(atomic_long_t *v) static __always_inline long raw_atomic_long_dec_if_positive(atomic_long_t *v) { -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 return raw_atomic64_dec_if_positive(v); #else return raw_atomic_dec_if_positive(v); @@ -1809,4 +1809,4 @@ raw_atomic_long_dec_if_positive(atomic_long_t *v) } #endif /* _LINUX_ATOMIC_LONG_H */ -// eadf183c3600b8b92b91839dd3be6bcc560c752d +// 1b27315f1248fc8d43401372db7dd5895889c5be diff --git a/scripts/atomic/gen-atomic-long.sh b/scripts/atomic/gen-atomic-long.sh 
index 9826be3ba986..7667305381fc 100755
--- a/scripts/atomic/gen-atomic-long.sh
+++ b/scripts/atomic/gen-atomic-long.sh
@@ -55,7 +55,7 @@ cat << EOF #include <linux/compiler.h> #include <asm/types.h> -#ifdef CONFIG_64BIT +#if BITS_PER_LONG == 64 typedef atomic64_t atomic_long_t; #define ATOMIC_LONG_INIT(i) ATOMIC64_INIT(i) #define atomic_long_cond_read_acquire atomic64_cond_read_acquire

From patchwork Tue Mar 25 12:16:12 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876222
2025 12:23:56 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1742905449; bh=YlFb6KkzVSGSuegSI2MJMuDKnOb+jmQgRm/aIyv4ASM=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sWpvaUIxJ2xmK0FQF0ljHhlr/YashdPyYsTn7al4PxnIvUdy5/V+e9JCSqYjPiQ5E qVMLggXWg+4qFNOeniAE53fLZ1Prkl0/M6GeEuOJ+uT+3t3q7fV6UUS2UIgffUSeso wRSxhB5nlUaDz5ji9smEUpkoHf9ykZfP/NySchrJUTDEg5MhkdVVvAnNX4EACaRX4T pdKSFNThVJ1sR53X8HbMYhyVDRiX7Q6sFsJkcrdiRsi7pFRqIxw20GaRxB8ae45rL7 7llPuKtxuxdA0NoRclxRUGq3TI+0zr3JNpZA7Af2b2pgVGyaoWF87wcU8ebw0LTt3d JLIfeJGTL7Z9g== From: guoren@kernel.org To: arnd@arndb.de, gregkh@linuxfoundation.org, torvalds@linux-foundation.org, paul.walmsley@sifive.com, palmer@dabbelt.com, anup@brainfault.org, atishp@atishpatra.org, oleg@redhat.com, kees@kernel.org, tglx@linutronix.de, will@kernel.org, mark.rutland@arm.com, brauner@kernel.org, akpm@linux-foundation.org, rostedt@goodmis.org, edumazet@google.com, unicorn_wang@outlook.com, inochiama@outlook.com, gaohan@iscas.ac.cn, shihua@iscas.ac.cn, jiawei@iscas.ac.cn, wuwei2016@iscas.ac.cn, drew@pdp7.com, prabhakar.mahadev-lad.rj@bp.renesas.com, ctsai390@andestech.com, wefu@redhat.com, kuba@kernel.org, pabeni@redhat.com, josef@toxicpanda.com, dsterba@suse.com, mingo@redhat.com, peterz@infradead.org, boqun.feng@gmail.com, guoren@kernel.org, xiao.w.wang@intel.com, qingfang.deng@siflower.com.cn, leobras@redhat.com, jszhang@kernel.org, conor.dooley@microchip.com, samuel.holland@sifive.com, yongxuan.wang@sifive.com, luxu.kernel@bytedance.com, david@redhat.com, ruanjinjie@huawei.com, cuiyunhui@bytedance.com, wangkefeng.wang@huawei.com, qiaozhe@iscas.ac.cn Cc: ardb@kernel.org, ast@kernel.org, linux-kernel@vger.kernel.org, linux-riscv@lists.infradead.org, kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-mm@kvack.org, linux-crypto@vger.kernel.org, bpf@vger.kernel.org, linux-input@vger.kernel.org, linux-perf-users@vger.kernel.org, linux-serial@vger.kernel.org, linux-fsdevel@vger.kernel.org, 
linux-arch@vger.kernel.org, maple-tree@lists.infradead.org, linux-trace-kernel@vger.kernel.org, netdev@vger.kernel.org, linux-atm-general@lists.sourceforge.net, linux-btrfs@vger.kernel.org, netfilter-devel@vger.kernel.org, coreteam@netfilter.org, linux-nfs@vger.kernel.org, linux-sctp@vger.kernel.org, linux-usb@vger.kernel.org, linux-media@vger.kernel.org Subject: [RFC PATCH V3 31/43] rv64ilp32_abi: maple_tree: Use BITS_PER_LONG instead of CONFIG_64BIT Date: Tue, 25 Mar 2025 08:16:12 -0400 Message-Id: <20250325121624.523258-32-guoren@kernel.org> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20250325121624.523258-1-guoren@kernel.org> References: <20250325121624.523258-1-guoren@kernel.org> Precedence: bulk X-Mailing-List: linux-media@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: "Guo Ren (Alibaba DAMO Academy)" The Maple tree algorithm uses ulong type for each element. The number of slots is based on BITS_PER_LONG for RV64ILP32 ABI, so use BITS_PER_LONG instead of CONFIG_64BIT. Signed-off-by: Guo Ren (Alibaba DAMO Academy) --- include/linux/maple_tree.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/include/linux/maple_tree.h b/include/linux/maple_tree.h index cbbcd18d4186..ff6265b6468b 100644 --- a/include/linux/maple_tree.h +++ b/include/linux/maple_tree.h @@ -24,7 +24,7 @@ * * Nodes in the tree point to their parent unless bit 0 is set. 
 */
-#if defined(CONFIG_64BIT) || defined(BUILD_VDSO32_64)
+#if (BITS_PER_LONG == 64) || defined(BUILD_VDSO32_64)
 /* 64bit sizes */
 #define MAPLE_NODE_SLOTS	31	/* 256 bytes including ->parent */
 #define MAPLE_RANGE64_SLOTS	16	/* 256 bytes */

From patchwork Tue Mar 25 12:16:14 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876221
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 33/43] rv64ilp32_abi: mm/auxvec: Adapt mm->saved_auxv[] to Elf64
Date: Tue, 25 Mar 2025 08:16:14 -0400
Message-Id: <20250325121624.523258-34-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

The RV64ILP32 ABI serves Elf64 userspace, so each auxv entry is 64 bits
wide even though unsigned long is 32 bits. Keeping mm->saved_auxv[] as
unsigned long truncates the vector and crashes init:

Unable to handle kernel paging request at virtual address 60723de0
Oops [#1]
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: init Not tainted 6.13.0-rc4-00031-g01dc3ca797b3-dirty #161
Hardware name: riscv-virtio,qemu (DT)
epc : percpu_counter_add_batch+0x38/0xc4
 ra : filemap_map_pages+0x3ec/0x54c
epc : ffffffffbc4ea02e ra : ffffffffbc1722e4 sp : ffffffffc1c4fc60
 gp : ffffffffbd6d3918 tp : ffffffffc1c50000 t0 : 0000000000000000
 t1 : 000000003fffefff t2 : 0000000000000000 s0 : ffffffffc1c4fca0
 s1 : 0000000000000022 a0 : ffffffffc25c8250 a1 : 0000000000000003
 a2 : 0000000000000020 a3 : 000000003fffefff a4 : 000000000b1c2000
 a5 : 0000000060723de0 a6 : ffffffffbffff000 a7 : 000000003fffffff
 s2 : ffffffffc25c8250 s3 : ffffffffc246e240 s4 : ffffffffc2138240
 s5 : ffffffffbd70c4d0 s6 : 0000000000000003 s7 : 0000000000000000
 s8 : ffffffff9a02d780 s9 : 0000000000000100 s10: ffffffffc1c4fda8
 s11: 0000000000000003 t3 : 0000000000000000 t4 : 00000000000004f7
 t5 : 0000000000000000 t6 : 0000000000000001
status: 0000000200000100 badaddr: 0000000060723de0 cause: 000000000000000d
[] percpu_counter_add_batch+0x38/0xc4
[] filemap_map_pages+0x3ec/0x54c
[] handle_mm_fault+0xb6c/0xe9c
[] handle_page_fault+0xd0/0x418
[] do_page_fault+0x20/0x3a
[] _new_vmalloc_restore_context_a0+0xb0/0xbc
Code: 8a93 4baa 511c 171b 0027 873b 00ea 4318 2481 9fb9 (aa03) 0007

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/mm_types.h | 4 ++++
 kernel/sys.c             | 8 ++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index da3ba1a79ad5..0d436b0217fd 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -962,7 +962,11 @@ struct mm_struct {
 		unsigned long start_brk, brk, start_stack;
 		unsigned long arg_start, arg_end, env_start, env_end;

+#ifdef CONFIG_64BIT
+		unsigned long long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#else
 		unsigned long saved_auxv[AT_VECTOR_SIZE]; /* for /proc/PID/auxv */
+#endif

 		struct percpu_counter rss_stat[NR_MM_COUNTERS];

diff --git a/kernel/sys.c b/kernel/sys.c
index cb366ff8703a..81c0d94ff50d 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2008,7 +2008,11 @@ static int validate_prctl_map_addr(struct prctl_mm_map *prctl_map)
 static int prctl_set_mm_map(int opt, const void __user *addr, unsigned long data_size)
 {
 	struct prctl_mm_map prctl_map = { .exe_fd = (u32)-1, };
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE];
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE];
+#endif
 	struct mm_struct *mm = current->mm;
 	int error;

@@ -2122,7 +2126,11 @@ static int prctl_set_auxv(struct mm_struct *mm, unsigned long addr,
 	 * up to the caller to provide sane values here, otherwise userspace
 	 * tools which use this vector might be unhappy.
 	 */
+#ifdef CONFIG_64BIT
+	unsigned long long user_auxv[AT_VECTOR_SIZE] = {};
+#else
 	unsigned long user_auxv[AT_VECTOR_SIZE] = {};
+#endif

 	if (len > sizeof(user_auxv))
 		return -EINVAL;

From patchwork Tue Mar 25 12:16:16 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876220
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 35/43] rv64ilp32_abi: net: Use BITS_PER_LONG in struct dst_entry
Date: Tue, 25 Mar 2025 08:16:16 -0400
Message-Id: <20250325121624.523258-36-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

The rv64ilp32 ABI enables CONFIG_64BIT but uses the smaller ILP32 data
types, so the layout of struct dst_entry follows BITS_PER_LONG rather
than the ISA width. Switch the layout #ifdefs from CONFIG_64BIT to
BITS_PER_LONG.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/net/dst.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/net/dst.h b/include/net/dst.h
index 78c78cdce0e9..af1c74c4836e 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -65,7 +65,7 @@ struct dst_entry {
 	 * __rcuref wants to be on a different cache line from
 	 * input/output/ops or performance tanks badly
 	 */
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	rcuref_t		__rcuref;	/* 64-bit offset 64 */
 #endif
 	int			__use;
@@ -74,7 +74,7 @@ struct dst_entry {
 	short			error;
 	short			__pad;
 	__u32			tclassid;
-#ifndef CONFIG_64BIT
+#if BITS_PER_LONG == 32
 	struct lwtunnel_state	*lwtstate;
 	rcuref_t		__rcuref;	/* 32-bit offset 64 */
 #endif
@@ -89,7 +89,7 @@ struct dst_entry {
 	 */
 	struct list_head	rt_uncached;
 	struct uncached_list	*rt_uncached_list;
-#ifdef CONFIG_64BIT
+#if BITS_PER_LONG == 64
 	struct lwtunnel_state	*lwtstate;
 #endif
 };

From patchwork Tue Mar 25 12:16:18 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876219
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 37/43] rv64ilp32_abi: random: Adapt fast_pool struct
Date: Tue, 25 Mar 2025 08:16:18 -0400
Message-Id: <20250325121624.523258-38-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

RV64ILP32 ABI systems have BITS_PER_LONG set to 32, matching
sizeof(compat_ulong_t), but the fast_pool mixing state should stay 64
bits wide on a 64-bit ISA. Use u64 explicitly under CONFIG_64BIT instead
of unsigned long.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/char/random.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 2581186fa61b..0bfbe02ee255 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1015,7 +1015,11 @@ EXPORT_SYMBOL_GPL(unregister_random_vmfork_notifier);
 #endif

 struct fast_pool {
+#ifdef CONFIG_64BIT
+	u64 pool[4];
+#else
 	unsigned long pool[4];
+#endif
 	unsigned long last;
 	unsigned int count;
 	struct timer_list mix;
@@ -1040,7 +1044,11 @@ static DEFINE_PER_CPU(struct fast_pool, irq_randomness) = {
 * and therefore this has no security on its own. s represents the
 * four-word SipHash state, while v represents a two-word input.
 */
+#ifdef CONFIG_64BIT
+static void fast_mix(u64 s[4], u64 v1, u64 v2)
+#else
 static void fast_mix(unsigned long s[4], unsigned long v1, unsigned long v2)
+#endif
 {
 	s[3] ^= v1;
 	FASTMIX_PERM(s[0], s[1], s[2], s[3]);

From patchwork Tue Mar 25 12:16:20 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876218
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 39/43] rv64ilp32_abi: sysinfo: Adapt sysinfo structure to lp64 uapi
Date: Tue, 25 Mar 2025 08:16:20 -0400
Message-Id: <20250325121624.523258-40-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

The RISC-V 64ilp32 ABI reuses the LP64 uapi and serves LP64 ABI userspace
directly, which requires widening struct sysinfo's unsigned long members
and arrays to u64.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 fs/proc/loadavg.c             | 10 +++++++---
 include/linux/sched/loadavg.h |  4 ++++
 include/uapi/linux/sysinfo.h  | 20 ++++++++++++++++++++
 kernel/sched/loadavg.c        |  4 ++++
 4 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/fs/proc/loadavg.c b/fs/proc/loadavg.c
index 817981e57223..643e06de3446 100644
--- a/fs/proc/loadavg.c
+++ b/fs/proc/loadavg.c
@@ -13,14 +13,18 @@
 static int loadavg_proc_show(struct seq_file *m, void *v)
 {
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+	unsigned long long avnrun[3];
+#else
 	unsigned long avnrun[3];
+#endif

 	get_avenrun(avnrun, FIXED_1/200, 0);

 	seq_printf(m, "%lu.%02lu %lu.%02lu %lu.%02lu %u/%d %d\n",
-		LOAD_INT(avnrun[0]), LOAD_FRAC(avnrun[0]),
-		LOAD_INT(avnrun[1]), LOAD_FRAC(avnrun[1]),
-		LOAD_INT(avnrun[2]), LOAD_FRAC(avnrun[2]),
+		LOAD_INT((ulong)avnrun[0]), LOAD_FRAC((ulong)avnrun[0]),
+		LOAD_INT((ulong)avnrun[1]), LOAD_FRAC((ulong)avnrun[1]),
+		LOAD_INT((ulong)avnrun[2]), LOAD_FRAC((ulong)avnrun[2]),
 		nr_running(), nr_threads,
 		idr_get_cursor(&task_active_pid_ns(current)->idr) - 1);

 	return 0;

diff --git a/include/linux/sched/loadavg.h b/include/linux/sched/loadavg.h
index 83ec54b65e79..8f2d6a827ee9 100644
--- a/include/linux/sched/loadavg.h
+++ b/include/linux/sched/loadavg.h
@@ -13,7 +13,11 @@
 * 11 bit fractions.
 */
 extern unsigned long avenrun[];		/* Load averages */
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+extern void get_avenrun(unsigned long long *loads, unsigned long offset, int shift);
+#else
 extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift);
+#endif

 #define FSHIFT		11		/* nr of bits of precision */
 #define FIXED_1		(1< #define SI_LOAD_SHIFT	16
+
+#if (__riscv_xlen == 64) && (__BITS_PER_LONG == 32)
+struct sysinfo {
+	__s64 uptime;		/* Seconds since boot */
+	__u64 loads[3];		/* 1, 5, and 15 minute load averages */
+	__u64 totalram;		/* Total usable main memory size */
+	__u64 freeram;		/* Available memory size */
+	__u64 sharedram;	/* Amount of shared memory */
+	__u64 bufferram;	/* Memory used by buffers */
+	__u64 totalswap;	/* Total swap space size */
+	__u64 freeswap;		/* swap space still available */
+	__u16 procs;		/* Number of current processes */
+	__u16 pad;		/* Explicit padding for m68k */
+	__u64 totalhigh;	/* Total high memory size */
+	__u64 freehigh;		/* Available high memory size */
+	__u32 mem_unit;		/* Memory unit size in bytes */
+	char _f[20-2*sizeof(__u64)-sizeof(__u32)];	/* Padding: libc5 uses this.. */
+};
+#else
 struct sysinfo {
 	__kernel_long_t uptime;		/* Seconds since boot */
 	__kernel_ulong_t loads[3];	/* 1, 5, and 15 minute load averages */
@@ -21,5 +40,6 @@ struct sysinfo {
 	__u32 mem_unit;			/* Memory unit size in bytes */
 	char _f[20-2*sizeof(__kernel_ulong_t)-sizeof(__u32)];	/* Padding: libc5 uses this..
 */
 };
+#endif

 #endif /* _LINUX_SYSINFO_H */

diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index c48900b856a2..f1f5abc64dea 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -68,7 +68,11 @@ EXPORT_SYMBOL(avenrun); /* should be removed */
 *
 * These values are estimates at best, so no need for locking.
 */
+#if defined(CONFIG_64BIT) && (BITS_PER_LONG == 32)
+void get_avenrun(unsigned long long *loads, unsigned long offset, int shift)
+#else
 void get_avenrun(unsigned long *loads, unsigned long offset, int shift)
+#endif
 {
 	loads[0] = (avenrun[0] + offset) << shift;
 	loads[1] = (avenrun[1] + offset) << shift;

From patchwork Tue Mar 25 12:16:22 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876217
From: "Guo Ren (Alibaba DAMO Academy)" <guoren@kernel.org>
Subject: [RFC PATCH V3 41/43] rv64ilp32_abi: tty: Adapt ptr_to_compat
Date: Tue, 25 Mar 2025 08:16:22 -0400
Message-Id: <20250325121624.523258-42-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

The RV64ILP32 ABI is based on a 64-bit ISA, but BITS_PER_LONG is 32, so
unsigned long has the same size as compat_ulong_t and the
"(unsigned long)v.iomem_base >> 32 ? 0xfffffff : ..." overflow check is
unnecessary. Guard it with BITS_PER_LONG == 64 and call ptr_to_compat()
directly otherwise.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 drivers/tty/tty_io.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index 449dbd216460..75e256e879d0 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -2873,8 +2873,12 @@ static int compat_tty_tiocgserial(struct tty_struct *tty,
 	err = tty->ops->get_serial(tty, &v);
 	if (!err) {
 		memcpy(&v32, &v, offsetof(struct serial_struct32, iomem_base));
+#if BITS_PER_LONG == 64
 		v32.iomem_base = (unsigned long)v.iomem_base >> 32 ?
			0xfffffff : ptr_to_compat(v.iomem_base);
+#else
+		v32.iomem_base = ptr_to_compat(v.iomem_base);
+#endif
 		v32.iomem_reg_shift = v.iomem_reg_shift;
 		v32.port_high = v.port_high;
 		if (copy_to_user(ss, &v32, sizeof(v32)))

From patchwork Tue Mar 25 12:16:23 2025
X-Patchwork-Submitter: Guo Ren
X-Patchwork-Id: 876216
From: guoren@kernel.org
Subject: [RFC PATCH V3 42/43] rv64ilp32_abi: memfd: Use vm_flag_t
Date: Tue, 25 Mar 2025 08:16:23 -0400
Message-Id: <20250325121624.523258-43-guoren@kernel.org>
In-Reply-To: <20250325121624.523258-1-guoren@kernel.org>
References: <20250325121624.523258-1-guoren@kernel.org>

From: "Guo Ren (Alibaba DAMO Academy)"

The RV64ILP32 ABI kernel is built on CONFIG_64BIT and defines
vm_flags_t as unsigned long long, so using unsigned long here would
break the rv64ilp32 ABI. The vm_flags_t definition already exists,
hence it is preferred even where it is not strictly essential.

Signed-off-by: Guo Ren (Alibaba DAMO Academy)
---
 include/linux/memfd.h | 4 ++--
 mm/memfd.c            | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/memfd.h b/include/linux/memfd.h
index 246daadbfde8..6f606d9573c3 100644
--- a/include/linux/memfd.h
+++ b/include/linux/memfd.h
@@ -14,7 +14,7 @@ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx);
  * We also update VMA flags if appropriate by manipulating the VMA flags pointed
  * to by vm_flags_ptr.
  */
-int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr);
+int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr);
 #else
 static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a)
 {
@@ -25,7 +25,7 @@ static inline struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
 	return ERR_PTR(-EINVAL);
 }
 static inline int memfd_check_seals_mmap(struct file *file,
-					 unsigned long *vm_flags_ptr)
+					 vm_flags_t *vm_flags_ptr)
 {
 	return 0;
 }
diff --git a/mm/memfd.c b/mm/memfd.c
index 37f7be57c2f5..50dad90ffedc 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -332,10 +332,10 @@ static inline bool is_write_sealed(unsigned int seals)
 	return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
 }

-static int check_write_seal(unsigned long *vm_flags_ptr)
+static int check_write_seal(vm_flags_t *vm_flags_ptr)
 {
-	unsigned long vm_flags = *vm_flags_ptr;
-	unsigned long mask = vm_flags & (VM_SHARED | VM_WRITE);
+	vm_flags_t vm_flags = *vm_flags_ptr;
+	vm_flags_t mask = vm_flags & (VM_SHARED | VM_WRITE);

 	/* If a private mapping then writability is irrelevant. */
 	if (!(mask & VM_SHARED))
@@ -357,7 +357,7 @@ static int check_write_seal(unsigned long *vm_flags_ptr)
 	return 0;
 }

-int memfd_check_seals_mmap(struct file *file, unsigned long *vm_flags_ptr)
+int memfd_check_seals_mmap(struct file *file, vm_flags_t *vm_flags_ptr)
 {
 	int err = 0;
 	unsigned int *seals_ptr = memfd_file_seals_ptr(file);