From patchwork Tue Jun 6 17:58:35 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Will Deacon
X-Patchwork-Id: 103184
Delivered-To: patch@linaro.org
From: Will Deacon
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: mark.rutland@arm.com, akpm@linux-foundation.org,
    kirill.shutemov@linux.intel.com, Punit.Agrawal@arm.com, mgorman@suse.de,
    steve.capper@arm.com, Will Deacon
Subject: [PATCH 2/3] mm/page_ref: Ensure page_ref_unfreeze is ordered against prior accesses
Date: Tue, 6 Jun 2017 18:58:35 +0100
Message-Id: <1496771916-28203-3-git-send-email-will.deacon@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1496771916-28203-1-git-send-email-will.deacon@arm.com>
References: <1496771916-28203-1-git-send-email-will.deacon@arm.com>
page_ref_freeze and page_ref_unfreeze are designed to be used as a pair,
wrapping a critical section where struct pages can be modified without
having to worry about consistency for a concurrent fast-GUP.

Whilst page_ref_freeze has full barrier semantics due to its use of
atomic_cmpxchg, page_ref_unfreeze is implemented using atomic_set,
which doesn't provide any barrier semantics and allows the operation to
be reordered with respect to page modifications in the critical section.

This patch ensures that page_ref_unfreeze is ordered after any critical
section updates, by invoking smp_mb__before_atomic() prior to the
atomic_set.

Cc: "Kirill A. Shutemov"
Acked-by: Steve Capper
Signed-off-by: Will Deacon
---
 include/linux/page_ref.h | 1 +
 1 file changed, 1 insertion(+)

-- 
2.1.4

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 610e13271918..74d32d7905cb 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -174,6 +174,7 @@ static inline void page_ref_unfreeze(struct page *page, int count)
 	VM_BUG_ON_PAGE(page_count(page) != 0, page);
 	VM_BUG_ON(count == 0);
 
+	smp_mb__before_atomic();
 	atomic_set(&page->_refcount, count);
 	if (page_ref_tracepoint_active(__tracepoint_page_ref_unfreeze))
 		__page_ref_unfreeze(page, count);