From patchwork Tue Nov 27 19:45:02 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 152161
From: Will Deacon <will.deacon@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, ard.biesheuvel@linaro.org,
    catalin.marinas@arm.com, rml@tech9.net, tglx@linutronix.de,
    peterz@infradead.org, schwidefsky@de.ibm.com,
    Will Deacon <will.deacon@arm.com>
Subject: [PATCH 2/2] arm64: preempt: Provide our own implementation of asm/preempt.h
Date: Tue, 27 Nov 2018 19:45:02 +0000
Message-Id: <1543347902-21170-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1543347902-21170-1-git-send-email-will.deacon@arm.com>
References: <1543347902-21170-1-git-send-email-will.deacon@arm.com>
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

The asm-generic/preempt.h implementation doesn't make use of the
PREEMPT_NEED_RESCHED flag, since this can interact badly with load/store
architectures which rely on the preempt_count word being unchanged across
an interrupt.

However, since we're a 64-bit architecture and the preempt count is only
32 bits wide, we can simply pack it next to the resched flag and load the
whole thing in one go, so that a dec-and-test operation doesn't need to
load twice.
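To make the packing trick concrete, here is a minimal userspace sketch (not
part of this patch): the low 32 bits stand in for preempt.count and bit 32
for the inverted need_resched flag, so a single 64-bit load plus a compare
against zero answers "did the count hit zero?" and "is a reschedule
pending?" at once. The mock_* names are invented for the example.

/*
 * Illustrative userspace mock (not kernel code): the low 32 bits play the
 * role of preempt.count and bit 32 plays the role of the inverted
 * need_resched flag, so one 64-bit load covers both fields.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MOCK_NEED_RESCHED	(1ULL << 32)	/* stands in for PREEMPT_NEED_RESCHED */

/* count = 1, flag set => no reschedule pending (the flag is inverted) */
static uint64_t mock_preempt = MOCK_NEED_RESCHED | 1;

static void mock_set_need_resched(void)
{
	/* Clearing bit 32 marks "reschedule needed", as in the patch. */
	mock_preempt &= ~MOCK_NEED_RESCHED;
}

static bool mock_dec_and_test(void)
{
	uint64_t pc = mock_preempt - 1;	/* one load sees count and flag */

	/* Write back only the 32-bit count, leaving the flag bit untouched. */
	mock_preempt = (mock_preempt & MOCK_NEED_RESCHED) | (uint32_t)pc;

	/* Zero means: count hit zero *and* a reschedule is pending. */
	return pc == 0;
}

int main(void)
{
	printf("no resched pending: %d\n", mock_dec_and_test());	/* prints 0 */

	mock_preempt = MOCK_NEED_RESCHED | 1;
	mock_set_need_resched();
	printf("resched pending:    %d\n", mock_dec_and_test());	/* prints 1 */
	return 0;
}

The __preempt_count_dec_and_test() added below follows the same pattern,
only operating on current_thread_info()->preempt_count with
READ_ONCE/WRITE_ONCE.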
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/Kbuild        |  1 -
 arch/arm64/include/asm/preempt.h     | 78 ++++++++++++++++++++++++++++++++++++
 arch/arm64/include/asm/thread_info.h | 13 +++++-
 3 files changed, 90 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm64/include/asm/preempt.h

-- 
2.1.4

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 6cd5d77b6b44..33498f900390 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -14,7 +14,6 @@ generic-y += local64.h
 generic-y += mcs_spinlock.h
 generic-y += mm-arch-hooks.h
 generic-y += msi.h
-generic-y += preempt.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
 generic-y += rwsem.h
diff --git a/arch/arm64/include/asm/preempt.h b/arch/arm64/include/asm/preempt.h
new file mode 100644
index 000000000000..832227d5ebc0
--- /dev/null
+++ b/arch/arm64/include/asm/preempt.h
@@ -0,0 +1,78 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_PREEMPT_H
+#define __ASM_PREEMPT_H
+
+#include <linux/thread_info.h>
+
+#define PREEMPT_NEED_RESCHED	BIT(32)
+#define PREEMPT_ENABLED	(PREEMPT_NEED_RESCHED)
+
+static inline int preempt_count(void)
+{
+	return READ_ONCE(current_thread_info()->preempt.count);
+}
+
+static inline void preempt_count_set(u64 pc)
+{
+	/* Preserve existing value of PREEMPT_NEED_RESCHED */
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+#define init_task_preempt_count(p) do { \
+	task_thread_info(p)->preempt_count = FORK_PREEMPT_COUNT; \
+} while (0)
+
+#define init_idle_preempt_count(p, cpu) do { \
+	task_thread_info(p)->preempt_count = PREEMPT_ENABLED; \
+} while (0)
+
+static inline void set_preempt_need_resched(void)
+{
+	current_thread_info()->preempt.need_resched = 0;
+}
+
+static inline void clear_preempt_need_resched(void)
+{
+	current_thread_info()->preempt.need_resched = 1;
+}
+
+static inline bool test_preempt_need_resched(void)
+{
+	return !current_thread_info()->preempt.need_resched;
+}
+
+static inline void __preempt_count_add(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc += val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+static inline void __preempt_count_sub(int val)
+{
+	u32 pc = READ_ONCE(current_thread_info()->preempt.count);
+	pc -= val;
+	WRITE_ONCE(current_thread_info()->preempt.count, pc);
+}
+
+static inline bool __preempt_count_dec_and_test(void)
+{
+	u64 pc = READ_ONCE(current_thread_info()->preempt_count);
+	WRITE_ONCE(current_thread_info()->preempt.count, --pc);
+	return !pc;
+}
+
+static inline bool should_resched(int preempt_offset)
+{
+	u64 pc = READ_ONCE(current_thread_info()->preempt_count);
+	return pc == preempt_offset;
+}
+
+#ifdef CONFIG_PREEMPT
+void preempt_schedule(void);
+#define __preempt_schedule() preempt_schedule()
+void preempt_schedule_notrace(void);
+#define __preempt_schedule_notrace() preempt_schedule_notrace()
+#endif /* CONFIG_PREEMPT */
+
+#endif /* __ASM_PREEMPT_H */
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index cb2c10a8f0a8..bbca68b54732 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -42,7 +42,18 @@ struct thread_info {
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	u64			ttbr0;		/* saved TTBR0_EL1 */
 #endif
-	int			preempt_count;	/* 0 => preemptable, <0 => bug */
+	union {
+		u64		preempt_count;	/* 0 => preemptible, <0 => bug */
+		struct {
+#ifdef CONFIG_CPU_BIG_ENDIAN
+			u32	need_resched;
+			u32	count;
+#else
+			u32	count;
+			u32	need_resched;
+#endif
+		} preempt;
+	};
 };
 
 #define thread_saved_pc(tsk)	\
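As a standalone sketch of how the new thread_info union lines up with the
64-bit preempt_count word, the snippet below uses a local mock_preempt_word
type rather than the kernel's struct thread_info and assumes a little-endian
host (the usual arm64 configuration).

/*
 * Little-endian illustration (not kernel code) of the union added to
 * struct thread_info: the u32 count sits in the low half of the u64
 * preempt_count, so BIT(32) lands in the need_resched field.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

union mock_preempt_word {
	uint64_t preempt_count;
	struct {
		uint32_t count;		/* low half on little-endian */
		uint32_t need_resched;	/* high half on little-endian */
	} preempt;
};

int main(void)
{
	union mock_preempt_word w = { .preempt_count = 0 };

	/* Setting bit 32 of the u64 view only flips the need_resched field. */
	w.preempt_count |= 1ULL << 32;
	assert(w.preempt.count == 0 && w.preempt.need_resched == 1);

	/* Writing the 32-bit count leaves the flag half untouched. */
	w.preempt.count = 3;
	printf("preempt_count = %#llx\n",
	       (unsigned long long)w.preempt_count);	/* 0x100000003 */
	return 0;
}

On a big-endian build, the CONFIG_CPU_BIG_ENDIAN branch in the patch swaps
the two u32 fields so that BIT(32) of preempt_count still corresponds to
need_resched.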