From patchwork Tue Apr 8 18:36:44 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: mhkelley58@gmail.com
X-Patchwork-Id: 879100
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: jayalk@intworks.biz, simona@ffwll.ch, deller@gmx.de, haiyangz@microsoft.com,
	kys@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	akpm@linux-foundation.org
Cc: weh@microsoft.com, tzimmermann@suse.de, hch@lst.de,
	dri-devel@lists.freedesktop.org, linux-fbdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 1/3] mm: Export vmf_insert_mixed_mkwrite()
Date: Tue, 8 Apr 2025 11:36:44 -0700
Message-Id: <20250408183646.1410-2-mhklinux@outlook.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250408183646.1410-1-mhklinux@outlook.com>
References: <20250408183646.1410-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com
X-Mailing-List: linux-fbdev@vger.kernel.org
MIME-Version: 1.0

From: Michael Kelley <mhklinux@outlook.com>

Export vmf_insert_mixed_mkwrite() for use by fbdev deferred I/O code,
which can be built as a module. For consistency with the related function
vmf_insert_mixed(), export without the GPL qualifier.

Commit cd1e0dac3a3e ("mm: unexport vmf_insert_mixed_mkwrite") is
effectively reverted.
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index 9d0ba6fe73c1..883ad53d077e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2660,6 +2660,7 @@ vm_fault_t vmf_insert_mixed_mkwrite(struct vm_area_struct *vma,
 {
 	return __vm_insert_mixed(vma, addr, pfn, true);
 }
+EXPORT_SYMBOL(vmf_insert_mixed_mkwrite);
 
 /*
  * maps a range of physical memory into the requested pages. the old

From patchwork Tue Apr 8 18:36:45 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: mhkelley58@gmail.com
X-Patchwork-Id: 879833
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: jayalk@intworks.biz, simona@ffwll.ch, deller@gmx.de, haiyangz@microsoft.com,
	kys@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	akpm@linux-foundation.org
Cc: weh@microsoft.com, tzimmermann@suse.de, hch@lst.de,
	dri-devel@lists.freedesktop.org, linux-fbdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 2/3] fbdev/deferred-io: Support contiguous kernel memory framebuffers
Date: Tue, 8 Apr 2025 11:36:45 -0700
Message-Id: <20250408183646.1410-3-mhklinux@outlook.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250408183646.1410-1-mhklinux@outlook.com>
References: <20250408183646.1410-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com
X-Mailing-List: linux-fbdev@vger.kernel.org
MIME-Version: 1.0

From: Michael Kelley <mhklinux@outlook.com>

Current defio code works only for framebuffer memory that is allocated
with vmalloc(). The code assumes that the underlying page refcount can be
used by the mm subsystem to manage each framebuffer page's lifecycle,
including freeing the page if the refcount goes to 0. This approach is
consistent with vmalloc'ed memory, but not with contiguous kernel memory
allocated via alloc_pages() or similar. The latter such memory pages
usually have a refcount of 0 when allocated, and would be incorrectly
freed page-by-page if used with defio. That freeing corrupts the memory
free lists and Linux eventually panics. Simply bumping the refcount after
allocation doesn't work because when the framebuffer memory is freed,
__free_pages() complains about non-zero refcounts.
Commit 37b4837959cb ("video: deferred io with physically contiguous
memory") from the year 2008 purported to add support for contiguous
kernel memory framebuffers. The motivating device, sh_mobile_lcdcfb, uses
dma_alloc_coherent() to allocate framebuffer memory, which is likely to
use alloc_pages(). It's unclear to me how this commit actually worked at
the time, unless dma_alloc_coherent() was pulling from a CMA pool instead
of alloc_pages(), or perhaps alloc_pages() worked differently on the
arm32 architecture on which sh_mobile_lcdcfb is used. In any case, for
x86 and arm64 today, commit 37b4837959cb is not sufficient to support
contiguous kernel memory framebuffers.

The problem can be seen with the hyperv_fb driver, which may allocate the
framebuffer memory using vmalloc() or alloc_pages(), depending on the
configuration of the Hyper-V guest VM (Gen 1 vs. Gen 2) and the size of
the framebuffer.

Fix this limitation by adding defio support for contiguous kernel memory
framebuffers. A driver with a framebuffer allocated from contiguous
kernel memory must set the FBINFO_KMEMFB flag to indicate such.

Tested with the hyperv_fb driver in both configurations -- with a
vmalloc() framebuffer and with an alloc_pages() framebuffer on x86. Also
verified a vmalloc() framebuffer on arm64. Hardware is not available to
me to verify that the older arm32 devices still work correctly, but the
path for vmalloc() framebuffers is essentially unchanged.

Even with these changes, defio does not support framebuffers in MMIO
space, as defio code depends on framebuffer memory pages having
corresponding 'struct page's.
Fixes: 3a6fb6c4255c ("video: hyperv: hyperv_fb: Use physical memory for fb on HyperV Gen 1 VMs.")
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
 drivers/video/fbdev/core/fb_defio.c | 126 +++++++++++++++++++++++-----
 include/linux/fb.h                  |   1 +
 2 files changed, 107 insertions(+), 20 deletions(-)

diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index 4fc93f253e06..0879973a4572 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -8,11 +8,38 @@
  * for more details.
  */
 
+/*
+ * Deferred I/O ("defio") allows framebuffers that are mmap()'ed to user space
+ * to batch user space writes into periodic updates to the underlying
+ * framebuffer hardware or other implementation (such as with a virtualized
+ * framebuffer in a VM). At each batch interval, a callback is invoked in the
+ * framebuffer's kernel driver, and the callback is supplied with a list of
+ * pages that have been modified in the preceding interval. The callback can
+ * use this information to update the framebuffer hardware as necessary. The
+ * batching can improve performance and reduce the overhead of updating the
+ * hardware.
+ *
+ * Defio is supported on framebuffers allocated using vmalloc() and allocated
+ * as contiguous kernel memory using alloc_pages(), kmalloc(), or
+ * dma_alloc_coherent(), the latter of which might allocate from CMA. These
+ * memory allocations all have corresponding "struct page"s. Framebuffers
+ * in MMIO space are *not* supported because MMIO space does not have
+ * corresponding "struct page"s.
+ *
+ * For framebuffers allocated using vmalloc(), struct fb_info must have
+ * "screen_buffer" set to the vmalloc address of the framebuffer. For
+ * framebuffers allocated from contiguous kernel memory, FBINFO_KMEMFB must
+ * be set, and "fix.smem_start" must be set to the physical address of the
+ * frame buffer. In both cases, "fix.smem_len" must be set to the framebuffer
+ * size in bytes.
+ */
+
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/errno.h>
 #include <linux/string.h>
 #include <linux/mm.h>
+#include <linux/pfn_t.h>
 #include <linux/vmalloc.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
@@ -37,7 +64,7 @@ static struct page *fb_deferred_io_get_page(struct fb_info *info, unsigned long
 	else if (info->fix.smem_start)
 		page = pfn_to_page((info->fix.smem_start + offs) >> PAGE_SHIFT);
 
-	if (page)
+	if (page && !(info->flags & FBINFO_KMEMFB))
 		get_page(page);
 
 	return page;
@@ -137,6 +164,15 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
 
 	BUG_ON(!info->fbdefio->mapping);
 
+	if (info->flags & FBINFO_KMEMFB)
+		/*
+		 * In this path, the VMA is marked VM_PFNMAP, so mm assumes
+		 * there is no struct page associated with the page. The
+		 * PFN must be directly inserted and the created PTE will be
+		 * marked "special".
+		 */
+		return vmf_insert_pfn(vmf->vma, vmf->address, page_to_pfn(page));
+
 	vmf->page = page;
 	return 0;
 }
@@ -163,13 +199,14 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_fsync);
 
 /*
  * Adds a page to the dirty list. Call this from struct
- * vm_operations_struct.page_mkwrite.
+ * vm_operations_struct.page_mkwrite or .pfn_mkwrite.
  */
-static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long offset,
+static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, struct vm_fault *vmf,
 					    struct page *page)
 {
 	struct fb_deferred_io *fbdefio = info->fbdefio;
 	struct fb_deferred_io_pageref *pageref;
+	unsigned long offset = vmf->pgoff << PAGE_SHIFT;
 	vm_fault_t ret;
 
 	/* protect against the workqueue changing the page list */
@@ -182,20 +219,34 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
 	}
 
 	/*
-	 * We want the page to remain locked from ->page_mkwrite until
-	 * the PTE is marked dirty to avoid mapping_wrprotect_range()
-	 * being called before the PTE is updated, which would leave
-	 * the page ignored by defio.
-	 * Do this by locking the page here and informing the caller
-	 * about it with VM_FAULT_LOCKED.
+	 * The PTE must be marked writable before the defio deferred work runs
+	 * again and potentially marks the PTE write-protected. If the order
+	 * should be switched, the PTE would become writable without defio
+	 * tracking the page, leaving the page forever ignored by defio.
+	 *
+	 * For vmalloc() framebuffers, the associated struct page is locked
+	 * before releasing the defio lock. mm will later mark the PTE writable
+	 * and release the struct page lock. The struct page lock prevents
+	 * the page from being prematurely marked write-protected.
+	 *
+	 * For FBINFO_KMEMFB framebuffers, mm assumes there is no struct page,
+	 * so the PTE must be marked writable while the defio lock is held.
 	 */
-	lock_page(pageref->page);
+	if (info->flags & FBINFO_KMEMFB) {
+		unsigned long pfn = page_to_pfn(pageref->page);
+
+		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address,
+					       __pfn_to_pfn_t(pfn, PFN_SPECIAL));
+	} else {
+		lock_page(pageref->page);
+		ret = VM_FAULT_LOCKED;
+	}
 	mutex_unlock(&fbdefio->lock);
 
 	/* come back after delay to process the deferred IO */
 	schedule_delayed_work(&info->deferred_work, fbdefio->delay);
 
-	return VM_FAULT_LOCKED;
+	return ret;
 
 err_mutex_unlock:
 	mutex_unlock(&fbdefio->lock);
@@ -207,10 +258,10 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
  * @fb_info: The fbdev info structure
  * @vmf: The VM fault
  *
- * This is a callback we get when userspace first tries to
- * write to the page. We schedule a workqueue. That workqueue
- * will eventually mkclean the touched pages and execute the
- * deferred framebuffer IO. Then if userspace touches a page
+ * This is a callback we get when userspace first tries to write to a
+ * page. We schedule a workqueue. That workqueue will eventually do
+ * mapping_wrprotect_range() on the written pages and execute the
+ * deferred framebuffer IO. Then if userspace writes to a page
  * again, we repeat the same scheme.
  *
  * Returns:
@@ -218,12 +269,11 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
  */
 static vm_fault_t fb_deferred_io_page_mkwrite(struct fb_info *info, struct vm_fault *vmf)
 {
-	unsigned long offset = vmf->pgoff << PAGE_SHIFT;
 	struct page *page = vmf->page;
 
 	file_update_time(vmf->vma->vm_file);
 
-	return fb_deferred_io_track_page(info, offset, page);
+	return fb_deferred_io_track_page(info, vmf, page);
 }
 
 /* vm_ops->page_mkwrite handler */
@@ -234,9 +284,25 @@ static vm_fault_t fb_deferred_io_mkwrite(struct vm_fault *vmf)
 
 	return fb_deferred_io_page_mkwrite(info, vmf);
 }
 
+/*
+ * Similar to fb_deferred_io_mkwrite(), but for first writes to pages
+ * in VMAs that have VM_PFNMAP set.
+ */
+static vm_fault_t fb_deferred_io_pfn_mkwrite(struct vm_fault *vmf)
+{
+	struct fb_info *info = vmf->vma->vm_private_data;
+	unsigned long offset = vmf->pgoff << PAGE_SHIFT;
+	struct page *page = phys_to_page(info->fix.smem_start + offset);
+
+	file_update_time(vmf->vma->vm_file);
+
+	return fb_deferred_io_track_page(info, vmf, page);
+}
+
 static const struct vm_operations_struct fb_deferred_io_vm_ops = {
 	.fault		= fb_deferred_io_fault,
 	.page_mkwrite	= fb_deferred_io_mkwrite,
+	.pfn_mkwrite	= fb_deferred_io_pfn_mkwrite,
 };
 
 static const struct address_space_operations fb_deferred_io_aops = {
@@ -246,11 +312,31 @@ static const struct address_space_operations fb_deferred_io_aops = {
 int fb_deferred_io_mmap(struct fb_info *info, struct vm_area_struct *vma)
 {
 	vma->vm_page_prot = pgprot_decrypted(vma->vm_page_prot);
+	vm_flags_t flags = VM_DONTEXPAND | VM_DONTDUMP;
 
 	vma->vm_ops = &fb_deferred_io_vm_ops;
-	vm_flags_set(vma, VM_DONTEXPAND | VM_DONTDUMP);
-	if (!(info->flags & FBINFO_VIRTFB))
-		vm_flags_set(vma, VM_IO);
+	if (info->flags & FBINFO_KMEMFB) {
+		/*
+		 * I/O fault path calls vmf_insert_pfn(), which bug checks
+		 * if the vma is not marked shared. mmap'ing the framebuffer
+		 * as PRIVATE doesn't really make sense anyway, though doing
+		 * so isn't harmful for vmalloc() framebuffers. So there's
+		 * no prohibition for that case.
+		 */
+		if (!(vma->vm_flags & VM_SHARED))
+			return -EINVAL;
+		/*
+		 * Set VM_PFNMAP so mm code will not try to manage the pages'
+		 * lifecycles. We don't want individual pages to be freed
+		 * based on refcount. Instead the memory must be returned to
+		 * the free pool in the usual way. Cf. the implementation of
+		 * remap_pfn_range() and remap_pfn_range_internal().
+		 */
+		flags |= VM_PFNMAP | VM_IO;
+	} else if (!(info->flags & FBINFO_VIRTFB)) {
+		flags |= VM_IO;
+	}
+	vm_flags_set(vma, flags);
 
 	vma->vm_private_data = info;
 	return 0;
 }
diff --git a/include/linux/fb.h b/include/linux/fb.h
index cd653862ab99..ea2092757a18 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -402,6 +402,7 @@ struct fb_tile_ops {
 
 /* hints */
 #define FBINFO_VIRTFB		0x0004 /* FB is System RAM, not device. */
+#define FBINFO_KMEMFB		0x0008 /* FB is allocated in contig kernel mem */
 #define FBINFO_PARTIAL_PAN_OK	0x0040 /* otw use pan only for double-buffering */
 #define FBINFO_READS_FAST	0x0080 /* soft-copy faster than rendering */

From patchwork Tue Apr 8 18:36:46 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: mhkelley58@gmail.com
X-Patchwork-Id: 879099
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: jayalk@intworks.biz, simona@ffwll.ch, deller@gmx.de, haiyangz@microsoft.com,
	kys@microsoft.com, wei.liu@kernel.org, decui@microsoft.com,
	akpm@linux-foundation.org
Cc: weh@microsoft.com, tzimmermann@suse.de, hch@lst.de,
	dri-devel@lists.freedesktop.org, linux-fbdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-mm@kvack.org
Subject: [PATCH 3/3] fbdev: hyperv_fb: Fix mmap of framebuffers allocated using alloc_pages()
Date: Tue, 8 Apr 2025 11:36:46 -0700
Message-Id: <20250408183646.1410-4-mhklinux@outlook.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20250408183646.1410-1-mhklinux@outlook.com>
References: <20250408183646.1410-1-mhklinux@outlook.com>
Reply-To: mhklinux@outlook.com
X-Mailing-List: linux-fbdev@vger.kernel.org
MIME-Version: 1.0

From: Michael Kelley <mhklinux@outlook.com>

Framebuffer memory allocated using alloc_pages() was added to hyperv_fb
in commit 3a6fb6c4255c ("video: hyperv: hyperv_fb: Use physical memory
for fb on HyperV Gen 1 VMs.") in kernel version 5.6. But mmap'ing such
framebuffers into user space has never worked due to limitations in the
kind of memory that fbdev deferred I/O works with. Because of this
limitation, hyperv_fb's usage corrupts the memory free lists and Linux
eventually panics.

With support for framebuffers allocated using alloc_pages() recently
added to fbdev deferred I/O, fix the problem by setting the flag telling
fbdev deferred I/O to use the new support.
Fixes: 3a6fb6c4255c ("video: hyperv: hyperv_fb: Use physical memory for fb on HyperV Gen 1 VMs.")
Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
 drivers/video/fbdev/hyperv_fb.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
index 75338ffc703f..1698221f857e 100644
--- a/drivers/video/fbdev/hyperv_fb.c
+++ b/drivers/video/fbdev/hyperv_fb.c
@@ -1020,6 +1020,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
 	info->fix.smem_len = screen_fb_size;
 	info->screen_base = par->mmio_vp;
 	info->screen_size = screen_fb_size;
+	info->flags |= FBINFO_KMEMFB;
 	par->need_docopy = false;
 	goto getmem_done;