From patchwork Tue Sep 29 11:01:38 2020
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 291034
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Mikulas Patocka,
 Andrew Morton, Dan Williams, Jan Kara, Jeff Moyer, Ingo Molnar,
 Christoph Hellwig, Toshi Kani, "H. Peter Anvin", Al Viro,
 Thomas Gleixner, Matthew Wilcox, Ross Zwisler, Ingo Molnar,
 Linus Torvalds
Subject: [PATCH 5.4 367/388] arch/x86/lib/usercopy_64.c: fix __copy_user_flushcache() cache writeback
Date: Tue, 29 Sep 2020 13:01:38 +0200
Message-Id: <20200929110028.233217533@linuxfoundation.org>
In-Reply-To: <20200929110010.467764689@linuxfoundation.org>
References: <20200929110010.467764689@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Mikulas Patocka

commit a1cd6c2ae47ee10ff21e62475685d5b399e2ed4a upstream.

If we copy less than 8 bytes and if the destination crosses a cache
line, __copy_user_flushcache() would invalidate only the first cache
line.  This patch makes it invalidate the second cache line as well.
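To make the failure concrete: with 64-byte cache lines, a 7-byte copy
whose destination starts 4 bytes before a line boundary dirties two
lines, but a length argument of 1 reaches only the first.  The
user-space sketch below (an illustration only, not kernel code) mirrors
the line-walking loop of clean_cache_range(); the line size and the
addresses are assumptions made up for the example:

  #include <stdio.h>
  #include <stdint.h>

  #define CLFLUSH_SIZE 64UL  /* assumed boot_cpu_data.x86_clflush_size */

  /* Mirror of clean_cache_range()'s traversal: one writeback per line. */
  static void show_flushed_lines(uintptr_t dst, size_t len)
  {
          uintptr_t mask = CLFLUSH_SIZE - 1;
          uintptr_t end = dst + len;
          uintptr_t p;

          for (p = dst & ~mask; p < end; p += CLFLUSH_SIZE)
                  printf("  writeback of line at %#lx\n", (unsigned long)p);
  }

  int main(void)
  {
          uintptr_t dst = 0x1000 + 60;  /* 4 bytes before a line boundary */

          printf("old: clean_cache_range(dst, 1)\n");
          show_flushed_lines(dst, 1);   /* first line only: 0x1000 */

          printf("new: clean_cache_range(dst, size)\n");
          show_flushed_lines(dst, 7);   /* both lines: 0x1000 and 0x1040 */
          return 0;
  }

With a length of 1, the loop's end pointer is dst + 1, so it stops
after the line containing dst even though the copy wrote into the next
one; passing the real size makes the loop cover every line the copy
touched.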
Fixes: 0aed55af88345b ("x86, uaccess: introduce copy_from_iter_flushcache for pmem / cache-bypass operations")
Signed-off-by: Mikulas Patocka
Signed-off-by: Andrew Morton
Reviewed-by: Dan Williams
Cc: Jan Kara
Cc: Jeff Moyer
Cc: Ingo Molnar
Cc: Christoph Hellwig
Cc: Toshi Kani
Cc: "H. Peter Anvin"
Cc: Al Viro
Cc: Thomas Gleixner
Cc: Matthew Wilcox
Cc: Ross Zwisler
Cc: Ingo Molnar
Cc: <stable@vger.kernel.org>
Link: https://lkml.kernel.org/r/alpine.LRH.2.02.2009161451140.21915@file01.intranet.prod.int.rdu2.redhat.com
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---
 arch/x86/lib/usercopy_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/lib/usercopy_64.c
+++ b/arch/x86/lib/usercopy_64.c
@@ -120,7 +120,7 @@ long __copy_user_flushcache(void *dst, c
 	 */
 	if (size < 8) {
 		if (!IS_ALIGNED(dest, 4) || size != 4)
-			clean_cache_range(dst, 1);
+			clean_cache_range(dst, size);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
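As a reading aid, here is how the small-copy branch reads after this
change; the code lines follow the hunk above and its surrounding
upstream context, while the comments are added glosses rather than
part of the kernel source:

	if (size < 8) {
		/*
		 * __copy_user_nocache() is only guaranteed to use
		 * non-temporal stores for a 4-byte copy to a 4-byte-
		 * aligned destination; other small copies may land in
		 * the cache, so write back the whole [dst, dst + size)
		 * range.  The old length of 1 covered just the line
		 * containing dst and missed the second line whenever
		 * the copy straddled a cache-line boundary.
		 */
		if (!IS_ALIGNED(dest, 4) || size != 4)
			clean_cache_range(dst, size);
	}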