From patchwork Tue Jan 5 09:28:58 2021
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 357444
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Boris Ostrovsky,
 Souptick Joarder, John Hubbard, Juergen Gross, David Vrabel, Jinoh Kang
Subject: [PATCH 4.19 12/29] xen/gntdev.c: Mark pages as dirty
Date: Tue, 5 Jan 2021 10:28:58 +0100
Message-Id: <20210105090820.118582071@linuxfoundation.org>
In-Reply-To: <20210105090818.518271884@linuxfoundation.org>
References: <20210105090818.518271884@linuxfoundation.org>
X-Mailing-List: stable@vger.kernel.org

From: Souptick Joarder

commit 779055842da5b2e508f3ccf9a8153cb1f704f566 upstream.

There is a bug in the original code: when gntdev_get_page() is called
with writeable=true, the page needs to be marked dirty before being
put.

To address this, add a bool writeable to struct gntdev_copy_batch, set
it in gntdev_grant_copy_seg() (and drop the writeable argument to
gntdev_get_page()), and then call set_page_dirty_lock() based on
batch->writeable.
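For context, a minimal sketch of the pattern the fix relies on, not taken
from gntdev.c itself: a user page pinned for write via get_user_pages_fast()
must be marked dirty with set_page_dirty_lock() before the reference is
dropped with put_page(), otherwise the write can be silently discarded on
reclaim. The sketch assumes the pre-5.2 signature that 4.19 provides (a
'write' bool as the third argument); the helper name is illustrative only.

#include <linux/mm.h>		/* get_user_pages_fast(), set_page_dirty_lock(), put_page() */
#include <linux/page-flags.h>	/* PageDirty() */
#include <linux/errno.h>	/* EFAULT */

/*
 * Illustrative helper: pin one user page, use it, then release it,
 * marking it dirty first if it was pinned for write.
 */
static int pin_use_release(unsigned long addr, bool writeable)
{
	struct page *page;
	int ret;

	/* Pin exactly one page; 'writeable' requests write access. */
	ret = get_user_pages_fast(addr, 1, writeable, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	/* ... read from or write to the page here ... */

	/*
	 * If the page may have been written to, mark it dirty before
	 * putting it so the modification is not lost.
	 */
	if (writeable && !PageDirty(page))
		set_page_dirty_lock(page);
	put_page(page);

	return 0;
}

This is exactly the ordering the patch below enforces in gntdev_put_pages().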
Fixes: a4cdb556cae0 ("xen/gntdev: add ioctl for grant copy")
Suggested-by: Boris Ostrovsky
Signed-off-by: Souptick Joarder
Cc: John Hubbard
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: David Vrabel
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/r/1599375114-32360-1-git-send-email-jrdr.linux@gmail.com
Reviewed-by: Boris Ostrovsky
Signed-off-by: Boris Ostrovsky
[jinoh: backport accounting for missing commit 73b0140bf0fe ("mm/gup:
 change GUP fast to use flags rather than a write 'bool'")]
Signed-off-by: Jinoh Kang
Signed-off-by: Greg Kroah-Hartman
---
 drivers/xen/gntdev.c |   17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -842,17 +842,18 @@ struct gntdev_copy_batch {
 	s16 __user *status[GNTDEV_COPY_BATCH];
 	unsigned int nr_ops;
 	unsigned int nr_pages;
+	bool writeable;
 };
 
 static int gntdev_get_page(struct gntdev_copy_batch *batch, void __user *virt,
-			   bool writeable, unsigned long *gfn)
+			   unsigned long *gfn)
 {
 	unsigned long addr = (unsigned long)virt;
 	struct page *page;
 	unsigned long xen_pfn;
 	int ret;
 
-	ret = get_user_pages_fast(addr, 1, writeable, &page);
+	ret = get_user_pages_fast(addr, 1, batch->writeable, &page);
 	if (ret < 0)
 		return ret;
 
@@ -868,9 +869,13 @@ static void gntdev_put_pages(struct gntd
 {
 	unsigned int i;
 
-	for (i = 0; i < batch->nr_pages; i++)
+	for (i = 0; i < batch->nr_pages; i++) {
+		if (batch->writeable && !PageDirty(batch->pages[i]))
+			set_page_dirty_lock(batch->pages[i]);
 		put_page(batch->pages[i]);
+	}
 	batch->nr_pages = 0;
+	batch->writeable = false;
 }
 
 static int gntdev_copy(struct gntdev_copy_batch *batch)
@@ -959,8 +964,9 @@ static int gntdev_grant_copy_seg(struct
 		virt = seg->source.virt + copied;
 		off = (unsigned long)virt & ~XEN_PAGE_MASK;
 		len = min(len, (size_t)XEN_PAGE_SIZE - off);
+		batch->writeable = false;
 
-		ret = gntdev_get_page(batch, virt, false, &gfn);
+		ret = gntdev_get_page(batch, virt, &gfn);
 		if (ret < 0)
 			return ret;
 
@@ -978,8 +984,9 @@ static int gntdev_grant_copy_seg(struct
 		virt = seg->dest.virt + copied;
 		off = (unsigned long)virt & ~XEN_PAGE_MASK;
 		len = min(len, (size_t)XEN_PAGE_SIZE - off);
+		batch->writeable = true;
 
-		ret = gntdev_get_page(batch, virt, true, &gfn);
+		ret = gntdev_get_page(batch, virt, &gfn);
 		if (ret < 0)
 			return ret;
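Note on the backport: 4.19 predates commit 73b0140bf0fe, so
get_user_pages_fast() here still takes a 'write' bool as its third argument,
which is why batch->writeable is passed directly above. On kernels with the
flags-based API the same request is expressed with FOLL_WRITE. A minimal
sketch of that variant, assuming the flags-based signature; the helper name
is hypothetical and not part of this patch:

#include <linux/mm.h>	/* get_user_pages_fast(), FOLL_WRITE */

/*
 * Illustrative helper: pin one user page, requesting write access only
 * when 'writeable' is set.  After 73b0140bf0fe the third argument is a
 * gup_flags bitmask rather than a bool.
 */
static int pin_one_page_flags(unsigned long addr, bool writeable,
			      struct page **page)
{
	return get_user_pages_fast(addr, 1,
				   writeable ? FOLL_WRITE : 0, page);
}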