From patchwork Wed May 10 08:55:24 2023
From: Ruihan Li <lrh2000@pku.edu.cn>
To: linux-mm@kvack.org
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org, Pasha Tatashin, David Hildenbrand, Matthew Wilcox, Andrew Morton, Christoph Hellwig, Greg Kroah-Hartman, Ruihan Li, syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com, stable@vger.kernel.org
Subject: [PATCH 1/4] usb: usbfs: Enforce page requirements for mmap
Date: Wed, 10 May 2023 16:55:24 +0800
Message-Id: <20230510085527.57953-2-lrh2000@pku.edu.cn>
In-Reply-To: <20230510085527.57953-1-lrh2000@pku.edu.cn>
References: <20230510085527.57953-1-lrh2000@pku.edu.cn>

The current implementation of usbdev_mmap uses usb_alloc_coherent to allocate
memory pages that will later be mapped into user space. Meanwhile,
usb_alloc_coherent employs three different methods to allocate memory, as
outlined below:
 * If hcd->localmem_pool is non-null, it uses gen_pool_dma_alloc to allocate
   memory.
 * If DMA is not available, it uses kmalloc to allocate memory.
 * Otherwise, it uses dma_alloc_coherent.

However, it should be noted that gen_pool_dma_alloc does not guarantee that
the resulting memory will be page-aligned. Furthermore, trying to map slab
pages (i.e., memory allocated by kmalloc) into user space is not reasonable
and can lead to problems, such as a type confusion bug when
PAGE_TABLE_CHECK=y [1].

To address these issues, this patch introduces hcd_buffer_alloc_pages, which
resolves both problems. Specifically, hcd_buffer_alloc_pages uses
gen_pool_dma_alloc_align instead of gen_pool_dma_alloc to ensure that the
memory is page-aligned, and, to replace kmalloc, it allocates pages directly
by calling __get_free_pages.

Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
Cc: stable@vger.kernel.org
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 drivers/usb/core/buffer.c | 41 +++++++++++++++++++++++++++++++++++++++
 drivers/usb/core/devio.c  |  9 +++++----
 include/linux/usb/hcd.h   |  5 +++++
 3 files changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/usb/core/buffer.c b/drivers/usb/core/buffer.c
index fbb087b72..6010ef9f5 100644
--- a/drivers/usb/core/buffer.c
+++ b/drivers/usb/core/buffer.c
@@ -172,3 +172,44 @@ void hcd_buffer_free(
 	}
 	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
 }
+
+void *hcd_buffer_alloc_pages(struct usb_hcd *hcd, size_t size,
+		gfp_t mem_flags, dma_addr_t *dma)
+{
+	if (size == 0)
+		return NULL;
+
+	if (hcd->localmem_pool)
+		return gen_pool_dma_alloc_align(hcd->localmem_pool,
+				size, dma, PAGE_SIZE);
+
+	/* some USB hosts just use PIO */
+	if (!hcd_uses_dma(hcd)) {
+		*dma = DMA_MAPPING_ERROR;
+		return (void *)__get_free_pages(mem_flags,
+				get_order(size));
+	}
+
+	return dma_alloc_coherent(hcd->self.sysdev,
+			size, dma, mem_flags);
+}
+
+void hcd_buffer_free_pages(struct usb_hcd *hcd, size_t size,
+		void *addr, dma_addr_t dma)
+{
+	if (!addr)
+		return;
+
+	if (hcd->localmem_pool) {
+		gen_pool_free(hcd->localmem_pool,
+				(unsigned long)addr, size);
+		return;
+	}
+
+	if (!hcd_uses_dma(hcd)) {
+		free_pages((unsigned long)addr, get_order(size));
+		return;
+	}
+
+	dma_free_coherent(hcd->self.sysdev, size, addr, dma);
+}
diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index e501a03d6..b4cf9e860 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -186,6 +186,7 @@ static int connected(struct usb_dev_state *ps)
 static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
 {
 	struct usb_dev_state *ps = usbm->ps;
+	struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);
 	unsigned long flags;
 
 	spin_lock_irqsave(&ps->lock, flags);
@@ -194,8 +195,8 @@ static void dec_usb_memory_use_count(struct usb_memory *usbm, int *count)
 		list_del(&usbm->memlist);
 	spin_unlock_irqrestore(&ps->lock, flags);
 
-	usb_free_coherent(ps->dev, usbm->size, usbm->mem,
-			usbm->dma_handle);
+	hcd_buffer_free_pages(hcd, usbm->size, usbm->mem,
+			usbm->dma_handle);
 	usbfs_decrease_memory_usage(
 			usbm->size + sizeof(struct usb_memory));
 	kfree(usbm);
@@ -247,8 +248,8 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 		goto error_decrease_mem;
 	}
 
-	mem = usb_alloc_coherent(ps->dev, size, GFP_USER | __GFP_NOWARN,
-			&dma_handle);
+	mem = hcd_buffer_alloc_pages(hcd, size, GFP_USER | __GFP_NOWARN,
+			&dma_handle);
 	if (!mem) {
 		ret = -ENOMEM;
 		goto error_free_usbm;
diff --git a/include/linux/usb/hcd.h b/include/linux/usb/hcd.h
index 094c77eaf..79f89109e 100644
--- a/include/linux/usb/hcd.h
+++ b/include/linux/usb/hcd.h
@@ -501,6 +501,11 @@ void *hcd_buffer_alloc(struct usb_bus *bus, size_t size,
 void hcd_buffer_free(struct usb_bus *bus, size_t size,
 	void *addr, dma_addr_t dma);
 
+void *hcd_buffer_alloc_pages(struct usb_hcd *hcd, size_t size,
+		gfp_t mem_flags, dma_addr_t *dma);
+void hcd_buffer_free_pages(struct usb_hcd *hcd, size_t size,
+		void *addr, dma_addr_t dma);
+
 /* generic bus glue, needed for host controllers that don't use PCI */
 extern irqreturn_t usb_hcd_irq(int irq, void *__hcd);
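
[Editorial context, not part of the series: the minimal user-space sketch
below shows how this code path is reached. The device path and buffer size
are illustrative assumptions; error handling is deliberately minimal.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Hypothetical device node; bus/device numbers vary per system. */
	int fd = open("/dev/bus/usb/001/001", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* This mmap call is served by usbdev_mmap in drivers/usb/core/devio.c.
	 * After this patch the kernel backs it with hcd_buffer_alloc_pages,
	 * so the pages are page-aligned and never slab memory.
	 */
	void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* The buffer can then carry zero-copy URB data (USBDEVFS_SUBMITURB). */
	munmap(buf, 4096);
	close(fd);
	return 0;
}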

From patchwork Wed May 10 08:55:25 2023
From: Ruihan Li <lrh2000@pku.edu.cn>
To: linux-mm@kvack.org
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org, Pasha Tatashin, David Hildenbrand, Matthew Wilcox, Andrew Morton, Christoph Hellwig, Greg Kroah-Hartman, Ruihan Li, stable@vger.kernel.org
Subject: [PATCH 2/4] usb: usbfs: Use consistent mmap functions
Date: Wed, 10 May 2023 16:55:25 +0800
Message-Id: <20230510085527.57953-3-lrh2000@pku.edu.cn>
In-Reply-To: <20230510085527.57953-1-lrh2000@pku.edu.cn>
References: <20230510085527.57953-1-lrh2000@pku.edu.cn>

When hcd->localmem_pool is non-null, it is used to allocate DMA memory. In
this case, the DMA address is properly returned (in dma_handle), and
dma_mmap_coherent should be used to map this memory into user space. However,
the current implementation uses remap_pfn_range, which is meant to map normal
pages (instead of DMA pages).

Instead of repeating the logic of the memory allocation function, this patch
checks the type of the allocated memory by testing whether dma_handle is
properly set. If dma_handle is properly returned, DMA pages were allocated
and dma_mmap_coherent should be used to map them. Otherwise, normal pages
were allocated and remap_pfn_range should be called. This ensures that the
correct mmap functions are used consistently, independently of the logic
details that determine which type of memory gets allocated.

Fixes: a0e710a7def4 ("USB: usbfs: fix mmap dma mismatch")
Cc: stable@vger.kernel.org
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 drivers/usb/core/devio.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
index b4cf9e860..5067030b7 100644
--- a/drivers/usb/core/devio.c
+++ b/drivers/usb/core/devio.c
@@ -235,7 +235,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 	size_t size = vma->vm_end - vma->vm_start;
 	void *mem;
 	unsigned long flags;
-	dma_addr_t dma_handle;
+	dma_addr_t dma_handle = DMA_MAPPING_ERROR;
 	int ret;
 
 	ret = usbfs_increase_memory_usage(size + sizeof(struct usb_memory));
@@ -265,7 +265,13 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
 	usbm->vma_use_count = 1;
 	INIT_LIST_HEAD(&usbm->memlist);
 
-	if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {
+	/* In DMA-unavailable cases, hcd_buffer_alloc_pages allocates
+	 * normal pages and assigns DMA_MAPPING_ERROR to dma_handle. Check
+	 * whether we are in such cases, and then use remap_pfn_range (or
+	 * dma_mmap_coherent) to map normal (or DMA) pages into the user
+	 * space, respectively.
+	 */
+	if (dma_handle == DMA_MAPPING_ERROR) {
 		if (remap_pfn_range(vma, vma->vm_start,
 				virt_to_phys(usbm->mem) >> PAGE_SHIFT,
 				size, vma->vm_page_prot) < 0) {
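
[Editorial context, not part of the series: a minimal sketch of the dispatch
pattern this commit message describes. The struct and function names here
are hypothetical; only the two mapping primitives and the DMA_MAPPING_ERROR
sentinel come from the patch itself.]

#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/mm.h>

struct my_buffer {
	void *mem;		/* kernel virtual address of the buffer */
	dma_addr_t dma_handle;	/* DMA_MAPPING_ERROR if not DMA memory */
	size_t size;
	struct device *dev;
};

static int my_mmap_buffer(struct my_buffer *buf, struct vm_area_struct *vma)
{
	if (buf->dma_handle == DMA_MAPPING_ERROR)
		/* Normal pages: map them by physical frame number. */
		return remap_pfn_range(vma, vma->vm_start,
				virt_to_phys(buf->mem) >> PAGE_SHIFT,
				buf->size, vma->vm_page_prot);

	/* Coherent DMA memory: let the DMA layer set up the mapping. */
	return dma_mmap_coherent(buf->dev, vma, buf->mem,
			buf->dma_handle, buf->size);
}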

From patchwork Wed May 10 08:55:26 2023
From: Ruihan Li <lrh2000@pku.edu.cn>
To: linux-mm@kvack.org
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org, Pasha Tatashin, David Hildenbrand, Matthew Wilcox, Andrew Morton, Christoph Hellwig, Greg Kroah-Hartman, Ruihan Li, stable@vger.kernel.org
Subject: [PATCH 3/4] mm: page_table_check: Make it dependent on !DEVMEM
Date: Wed, 10 May 2023 16:55:26 +0800
Message-Id: <20230510085527.57953-4-lrh2000@pku.edu.cn>
In-Reply-To: <20230510085527.57953-1-lrh2000@pku.edu.cn>
References: <20230510085527.57953-1-lrh2000@pku.edu.cn>

The special device /dev/mem enables users to map arbitrary physical memory
regions into user space, which can conflict with the double-mapping detection
logic used by the page table check. For instance, pages may change their
properties (e.g., from anonymous pages to named pages) while they are still
being mapped in user space via /dev/mem, leading to "corruption" detected by
the page table check.

To address this issue, the PAGE_TABLE_CHECK config option is now dependent on
!DEVMEM. This ensures that the page table check cannot be enabled when
/dev/mem is used. It should be noted that /dev/mem itself is a significant
security issue, and its conflict with a hardening technique is
understandable.

Cc: stable@vger.kernel.org # 5.17
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 Documentation/mm/page_table_check.rst | 18 ++++++++++++++++++
 mm/Kconfig.debug                      |  2 +-
 2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/Documentation/mm/page_table_check.rst b/Documentation/mm/page_table_check.rst
index cfd8f4117..b04f29230 100644
--- a/Documentation/mm/page_table_check.rst
+++ b/Documentation/mm/page_table_check.rst
@@ -52,3 +52,21 @@ Build kernel with:
 
 Optionally, build kernel with PAGE_TABLE_CHECK_ENFORCED in order to have page
 table support without extra kernel parameter.
+
+Implementation notes
+====================
+
+We specifically decided not to use VMA information in order to avoid relying
+on MM states (except for limited "struct page" info). The page table check is
+separate from the Linux-MM state machine, and it verifies that user-accessible
+pages are not falsely shared.
+
+As a result, special devices that violate the model cannot live with
+PAGE_TABLE_CHECK. Currently, /dev/mem is the only known example. Given that it
+allows users to map arbitrary physical memory regions into user space, any
+pages may change their properties (e.g., from anonymous pages to named pages)
+while they are still being mapped in user space via /dev/mem, leading to
+"corruption" detected by the page table check. Therefore, the PAGE_TABLE_CHECK
+config option is now dependent on !DEVMEM. It's worth noting that /dev/mem
+itself is a significant security issue, and its conflict with a hardening
+technique is understandable.
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index a925415b4..37f3d5b20 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -97,7 +97,7 @@ config PAGE_OWNER
 
 config PAGE_TABLE_CHECK
 	bool "Check for invalid mappings in user page tables"
-	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK
+	depends on ARCH_SUPPORTS_PAGE_TABLE_CHECK && !DEVMEM
 	select PAGE_EXTENSION
 	help
 	  Check that anonymous page is not being mapped twice with read write
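
[Editorial context, not part of the series: the sketch below shows the kind
of user-space aliasing that /dev/mem permits and that the page table check
cannot model. It is illustrative only: it assumes root privileges and a
kernel without strict /dev/mem restrictions, and the physical offset is a
placeholder.]

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	/* Requires root; fails if CONFIG_STRICT_DEVMEM restricts access. */
	int fd = open("/dev/mem", O_RDWR);
	if (fd < 0) {
		perror("open /dev/mem");
		return 1;
	}

	/* 0x1000 is a placeholder physical offset. Once this mapping exists,
	 * the kernel may repurpose the underlying page (e.g., hand it out as
	 * an anonymous page) while the stale /dev/mem mapping persists -- a
	 * state the page table check's model treats as corruption.
	 */
	void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0x1000);
	if (p == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	munmap(p, 4096);
	close(fd);
	return 0;
}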

From patchwork Wed May 10 08:55:27 2023
From: Ruihan Li <lrh2000@pku.edu.cn>
To: linux-mm@kvack.org
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org, Pasha Tatashin, David Hildenbrand, Matthew Wilcox, Andrew Morton, Christoph Hellwig, Greg Kroah-Hartman, Ruihan Li, syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com, stable@vger.kernel.org
Subject: [PATCH 4/4] mm: page_table_check: Ensure user pages are not slab pages
Date: Wed, 10 May 2023 16:55:27 +0800
Message-Id: <20230510085527.57953-5-lrh2000@pku.edu.cn>
In-Reply-To: <20230510085527.57953-1-lrh2000@pku.edu.cn>
References: <20230510085527.57953-1-lrh2000@pku.edu.cn>

The current uses of PageAnon in the page table check functions can lead to
type confusion bugs between struct page and slab [1], if slab pages are
accidentally mapped into user space. This is because slab reuses the bits in
struct page to store its internal states, which renders PageAnon ineffective
on slab pages.

Since slab pages are not expected to be mapped into user space, this patch
adds BUG_ON(PageSlab(page)) checks to ensure that slab pages are never
inadvertently mapped. If such a mapping ever occurs, there must be a bug
elsewhere in the kernel.

Reported-by: syzbot+fcf1a817ceb50935ce99@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/lkml/000000000000258e5e05fae79fc1@google.com/ [1]
Fixes: df4e817b7108 ("mm: page table check")
Cc: stable@vger.kernel.org # 5.17
Signed-off-by: Ruihan Li <lrh2000@pku.edu.cn>
---
 include/linux/page-flags.h | 6 ++++++
 mm/page_table_check.c      | 6 ++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 1c68d67b8..7475a5399 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -617,6 +617,12 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
  * Please note that, confusingly, "page_mapping" refers to the inode
  * address_space which maps the page from disk; whereas "page_mapped"
  * refers to user virtual address space into which the page is mapped.
+ *
+ * For slab pages, since slab reuses the bits in struct page to store its
+ * internal states, the page->mapping does not exist as such, nor do these
+ * flags below. So in order to avoid testing non-existent bits, please
+ * make sure that PageSlab(page) actually evaluates to false before calling
+ * the following functions (e.g., PageAnon). See slab.h.
  */
 #define PAGE_MAPPING_ANON	0x1
 #define PAGE_MAPPING_MOVABLE	0x2
diff --git a/mm/page_table_check.c b/mm/page_table_check.c
index 25d8610c0..f2baf97d5 100644
--- a/mm/page_table_check.c
+++ b/mm/page_table_check.c
@@ -71,6 +71,8 @@ static void page_table_check_clear(struct mm_struct *mm, unsigned long addr,
 
 	page = pfn_to_page(pfn);
 	page_ext = page_ext_get(page);
+
+	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -107,6 +109,8 @@ static void page_table_check_set(struct mm_struct *mm, unsigned long addr,
 
 	page = pfn_to_page(pfn);
 	page_ext = page_ext_get(page);
+
+	BUG_ON(PageSlab(page));
 	anon = PageAnon(page);
 
 	for (i = 0; i < pgcnt; i++) {
@@ -133,6 +137,8 @@ void __page_table_check_zero(struct page *page, unsigned int order)
 	struct page_ext *page_ext;
 	unsigned long i;
 
+	BUG_ON(PageSlab(page));
+
 	page_ext = page_ext_get(page);
 	BUG_ON(!page_ext);
 	for (i = 0; i < (1ul << order); i++) {
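
[Editorial context, not part of the series: a hedged sketch of the check
ordering this patch enforces. The helper below is hypothetical and is not
added by the patch; it only illustrates the rule that PageSlab must be
tested, and be false, before PageAnon is meaningful.]

#include <linux/bug.h>
#include <linux/page-flags.h>

static bool page_is_user_anon(struct page *page)
{
	/* Slab overlays page->mapping with internal state (e.g., a cache
	 * pointer), so PageAnon would read repurposed bits on a slab page.
	 */
	BUG_ON(PageSlab(page));

	return PageAnon(page);
}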