From patchwork Mon Aug  3 09:52:36 2015
X-Patchwork-Submitter: Jiri Slaby
X-Patchwork-Id: 51859
From: Jiri Slaby
To: stable@vger.kernel.org
Cc: Christoffer Dall, Catalin Marinas, Jiri Slaby
Subject: [patch added to the 3.12 stable tree] arm64: Don't report clear pmds and puds as huge
Date:
Mon, 3 Aug 2015 11:52:36 +0200
Message-Id: <1438595559-22252-49-git-send-email-jslaby@suse.cz>
In-Reply-To: <1438595559-22252-1-git-send-email-jslaby@suse.cz>
References: <1438595559-22252-1-git-send-email-jslaby@suse.cz>

From: Christoffer Dall

This patch has been added to the 3.12 stable tree. If you have any
objections, please let us know.

===============

commit fd28f5d439fca77348c129d5b73043a56f8a0296 upstream.

The current pmd_huge() and pud_huge() functions simply check if the
table bit is not set and report the entries as huge in that case. This
is counter-intuitive, as a clear pmd/pud cannot also be a huge pmd/pud,
and it is inconsistent with at least arm and x86.

To prevent others from making the same mistake as me in looking at code
that calls these functions, and to fix an issue with KVM on arm64 that
causes memory corruption due to incorrect page reference counting
resulting from this mistake, let's change the behavior.
Signed-off-by: Christoffer Dall
Reviewed-by: Steve Capper
Acked-by: Marc Zyngier
Fixes: 084bd29810a5 ("ARM64: mm: HugeTLB support.")
Signed-off-by: Catalin Marinas
Signed-off-by: Jiri Slaby
---
 arch/arm64/mm/hugetlbpage.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 2de9d2e59d96..0eeb4f0930a0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -40,13 +40,13 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
 
 int pmd_huge(pmd_t pmd)
 {
-	return !(pmd_val(pmd) & PMD_TABLE_BIT);
+	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
 }
 
 int pud_huge(pud_t pud)
 {
 #ifndef __PAGETABLE_PMD_FOLDED
-	return !(pud_val(pud) & PUD_TABLE_BIT);
+	return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
 #else
 	return 0;
 #endif