From patchwork Wed Nov 16 10:26:54 2022
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 625378
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: x86@kernel.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-ia64@vger.kernel.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    sparclinux@vger.kernel.org, linux-um@lists.infradead.org, etnaviv@lists.freedesktop.org,
    dri-devel@lists.freedesktop.org, linux-samsung-soc@vger.kernel.org,
    linux-rdma@vger.kernel.org, linux-media@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-perf-users@vger.kernel.org,
    linux-security-module@vger.kernel.org, linux-kselftest@vger.kernel.org,
    Linus Torvalds, Andrew Morton, Jason Gunthorpe, John Hubbard, Peter Xu,
    Greg Kroah-Hartman, Andrea Arcangeli, Hugh Dickins, Nadav Amit, Vlastimil Babka,
    Matthew Wilcox, Mike Kravetz, Muchun Song, Shuah Khan, Lucas Stach, David Airlie,
    Oded Gabbay, Arnd Bergmann, Christoph Hellwig, Alex Williamson, David Hildenbrand,
    Andy Walls, Mauro Carvalho Chehab
Subject: [PATCH mm-unstable v1 15/20] media: pci/ivtv: remove FOLL_FORCE usage
Date: Wed, 16 Nov 2022 11:26:54 +0100
Message-Id: <20221116102659.70287-16-david@redhat.com>
In-Reply-To: <20221116102659.70287-1-david@redhat.com>
References: <20221116102659.70287-1-david@redhat.com>
FOLL_FORCE is really only for ptrace access. R/O pinning a page is
supposed to fail if the VMA lacks proper access permissions (no
VM_READ).

Let's just remove FOLL_FORCE usage here; there would have to be a
pretty good reason to allow arbitrary drivers to R/O pin pages in a
PROT_NONE VMA. Most probably, FOLL_FORCE usage is just some legacy
leftover.

Cc: Andy Walls
Cc: Mauro Carvalho Chehab
Signed-off-by: David Hildenbrand
Acked-by: Hans Verkuil
---
 drivers/media/pci/ivtv/ivtv-udma.c | 2 +-
 drivers/media/pci/ivtv/ivtv-yuv.c  | 5 ++---
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/media/pci/ivtv/ivtv-udma.c b/drivers/media/pci/ivtv/ivtv-udma.c
index 210be8290f24..99b9f55ca829 100644
--- a/drivers/media/pci/ivtv/ivtv-udma.c
+++ b/drivers/media/pci/ivtv/ivtv-udma.c
@@ -115,7 +115,7 @@ int ivtv_udma_setup(struct ivtv *itv, unsigned long ivtv_dest_addr,
 
 	/* Pin user pages for DMA Xfer */
 	err = pin_user_pages_unlocked(user_dma.uaddr, user_dma.page_count,
-			dma->map, FOLL_FORCE);
+			dma->map, 0);
 
 	if (user_dma.page_count != err) {
 		IVTV_DEBUG_WARN("failed to map user pages, returned %d instead of %d\n",
diff --git a/drivers/media/pci/ivtv/ivtv-yuv.c b/drivers/media/pci/ivtv/ivtv-yuv.c
index 4ba10c34a16a..582146f8d70d 100644
--- a/drivers/media/pci/ivtv/ivtv-yuv.c
+++ b/drivers/media/pci/ivtv/ivtv-yuv.c
@@ -63,12 +63,11 @@ static int ivtv_yuv_prep_user_dma(struct ivtv *itv, struct ivtv_user_dma *dma,
 
 	/* Pin user pages for DMA Xfer */
 	y_pages = pin_user_pages_unlocked(y_dma.uaddr,
-			y_dma.page_count, &dma->map[0], FOLL_FORCE);
+			y_dma.page_count, &dma->map[0], 0);
 	uv_pages = 0; /* silence gcc. value is set and consumed only if: */
 	if (y_pages == y_dma.page_count) {
 		uv_pages = pin_user_pages_unlocked(uv_dma.uaddr,
-				uv_dma.page_count, &dma->map[y_pages],
-				FOLL_FORCE);
+				uv_dma.page_count, &dma->map[y_pages], 0);
 	}
 
 	if (y_pages != y_dma.page_count || uv_pages != uv_dma.page_count) {
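
For readers less familiar with the pin_user_pages API, here is a minimal sketch
(not part of the patch) of the call pattern this change results in: a driver
pins user pages for DMA with gup_flags of 0, or FOLL_WRITE when the device
writes into the buffer, and unpins them on a partial pin. The helper name and
its "write" parameter are invented for illustration; pin_user_pages_unlocked(),
unpin_user_pages() and FOLL_WRITE are the real interfaces used in the diff
above.

#include <linux/errno.h>
#include <linux/mm.h>

/*
 * Illustrative sketch only: pin a user buffer for DMA without FOLL_FORCE.
 * Without FOLL_FORCE, pinning fails if the VMA lacks the required access
 * permissions, which is exactly the behavior the commit message describes.
 */
static int example_pin_user_buffer(unsigned long uaddr, unsigned long nr_pages,
				   struct page **pages, bool write)
{
	unsigned int gup_flags = write ? FOLL_WRITE : 0;	/* no FOLL_FORCE */
	long pinned;

	pinned = pin_user_pages_unlocked(uaddr, nr_pages, pages, gup_flags);
	if (pinned != nr_pages) {
		/* Partial pin: release what was pinned and report failure. */
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		return -EFAULT;
	}
	return 0;
}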