From patchwork Mon Mar 19 15:47:55 2018
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 132095
From: Sasha Levin
To: "linux-kernel@vger.kernel.org", "stable@vger.kernel.org"
CC: Catalin Marinas, Sasha Levin
Subject: [PATCH AUTOSEL for 4.15 053/124] arm64: asid: Do not replace active_asids if already 0
Date: Mon, 19 Mar 2018 15:47:55 +0000
Message-ID: <20180319154645.11350-53-alexander.levin@microsoft.com>
References: <20180319154645.11350-1-alexander.levin@microsoft.com>
In-Reply-To: <20180319154645.11350-1-alexander.levin@microsoft.com>
From: Catalin Marinas

[ Upstream commit a8ffaaa060b8d4da6138e0958cb0f45b73e1cb78 ]

Under some uncommon timing conditions, a generation check and
xchg(active_asids, A1) in check_and_switch_context() on P1 can race with
an ASID roll-over on P2. If P2 has not seen the update to
active_asids[P1], it can re-allocate A1 to a new task T2 on P2. P1 ends
up waiting on the spinlock since the xchg() returned 0 while P2 can go
through a second ASID roll-over with (T2,A1,G2) active on P2. This
roll-over copies active_asids[P1] == A1,G1 into reserved_asids[P1] and
active_asids[P2] == A1,G2 into reserved_asids[P2]. A subsequent
scheduling of T1 on P1 and T2 on P2 would match reserved_asids and get
their generation bumped to G3:

P1                                      P2
--                                      --
TTBR0.BADDR = T0
TTBR0.ASID = A0
asid_generation = G1
check_and_switch_context(T1,A1,G1)
  generation match
                                        check_and_switch_context(T2,A0,G0)
                                          new_context()
                                            ASID roll-over
                                            asid_generation = G2
                                            flush_context()
                                              active_asids[P1] = 0
                                              asid_map[A1] = 0
                                              reserved_asids[P1] = A0,G0
  xchg(active_asids, A1)
    active_asids[P1] = A1,G1
    xchg returns 0
  spin_lock_irqsave()
                                            allocated ASID (T2,A1,G2)
                                            asid_map[A1] = 1
                                          active_asids[P2] = A1,G2
                                        ...
                                        check_and_switch_context(T3,A0,G0)
                                          new_context()
                                            ASID roll-over
                                            asid_generation = G3
                                            flush_context()
                                              active_asids[P1] = 0
                                              asid_map[A1] = 1
                                              reserved_asids[P1] = A1,G1
                                              reserved_asids[P2] = A1,G2
                                            allocated ASID (T3,A2,G3)
                                            asid_map[A2] = 1
                                          active_asids[P2] = A2,G3
  new_context()
    check_update_reserved_asid(A1,G1)
      matches reserved_asid[P1]
      reserved_asid[P1] = A1,G3
  updated T1 ASID to (T1,A1,G3)
                                        check_and_switch_context(T2,A1,G2)
                                          new_context()
                                            check_update_reserved_asid(A1,G2)
                                              matches reserved_asids[P2]
                                              reserved_asids[P2] = A1,G3
                                          updated T2 ASID to (T2,A1,G3)

At this point, we have two tasks, T1 and T2, both using ASID A1 with the
latest generation G3. Either of them may be scheduled on the other CPU,
leading to two different tasks with the same ASID on the same CPU.

This patch changes the xchg to cmpxchg so that active_asids is only
updated if non-zero, to avoid a race with an ASID roll-over on a
different CPU.

The ASID allocation algorithm has been formally verified using the TLA+
model checker (see
https://git.kernel.org/pub/scm/linux/kernel/git/cmarinas/kernel-tla.git/tree/asidalloc.tla
for the spec).

Reviewed-by: Will Deacon
Signed-off-by: Catalin Marinas
Signed-off-by: Sasha Levin
---
 arch/arm64/mm/context.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

-- 
2.14.1

diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index b1ac80fba578..301417ae2ba8 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -194,26 +194,29 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
 void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
 {
 	unsigned long flags;
-	u64 asid;
+	u64 asid, old_active_asid;
 
 	asid = atomic64_read(&mm->context.id);
 
 	/*
 	 * The memory ordering here is subtle.
-	 * If our ASID matches the current generation, then we update
-	 * our active_asids entry with a relaxed xchg. Racing with a
-	 * concurrent rollover means that either:
+	 * If our active_asids is non-zero and the ASID matches the current
+	 * generation, then we update the active_asids entry with a relaxed
+	 * cmpxchg. Racing with a concurrent rollover means that either:
 	 *
-	 * - We get a zero back from the xchg and end up waiting on the
+	 * - We get a zero back from the cmpxchg and end up waiting on the
 	 *   lock. Taking the lock synchronises with the rollover and so
 	 *   we are forced to see the updated generation.
 	 *
-	 * - We get a valid ASID back from the xchg, which means the
+	 * - We get a valid ASID back from the cmpxchg, which means the
 	 *   relaxed xchg in flush_context will treat us as reserved
 	 *   because atomic RmWs are totally ordered for a given location.
 	 */
-	if (!((asid ^ atomic64_read(&asid_generation)) >> asid_bits)
-	    && atomic64_xchg_relaxed(&per_cpu(active_asids, cpu), asid))
+	old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
+	if (old_active_asid &&
+	    !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
+	    atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
+				     old_active_asid, asid))
 		goto switch_mm_fastpath;
 
 	raw_spin_lock_irqsave(&cpu_asid_lock, flags);