From patchwork Mon Mar 3 17:10:11 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 869871
Date: Mon, 3 Mar 2025 17:10:11 +0000
In-Reply-To: <20250303171013.3548775-1-tabba@google.com>
References: <20250303171013.3548775-1-tabba@google.com>
Message-ID: <20250303171013.3548775-8-tabba@google.com>
Subject: [PATCH v5 7/9] KVM: arm64: Handle guest_memfd()-backed guest page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
    anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
    aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
    xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
    jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
    isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
    vannapurve@google.com, ackerleytng@google.com,
    mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
    wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
    kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
    steven.price@arm.com, quic_eberman@quicinc.com,
    quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
    quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
    quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
    catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com,
    oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
    qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
    shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
    rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
    hughd@google.com, jthoughton@google.com, peterx@redhat.com,
    tabba@google.com

Add arm64 support for handling guest page faults on guest_memfd-backed
memslots. For now, the fault granule is restricted to PAGE_SIZE.

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c     | 65 +++++++++++++++++++++++++++-------------
 include/linux/kvm_host.h |  5 ++++
 virt/kvm/kvm_main.c      |  5 ----
 3 files changed, 50 insertions(+), 25 deletions(-)
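Note for context (not part of the commit message or the diff below): a
"guest_memfd-backed memslot" is one that userspace bound to a guest_memfd
file rather than to a host VMA. A minimal userspace sketch of creating such
a slot follows, assuming a kernel that exposes KVM_CAP_GUEST_MEMFD; the
helper name, slot id, and offset values are illustrative only, not code
from this series:

    #include <linux/kvm.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Illustrative helper: create a guest_memfd and bind it to slot 0. */
    static int bind_gmem_slot(int vm_fd, __u64 gpa, __u64 size)
    {
            struct kvm_create_guest_memfd gmem = { .size = size };
            struct kvm_userspace_memory_region2 region;
            int gmem_fd;

            gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
            if (gmem_fd < 0)
                    return gmem_fd;

            memset(&region, 0, sizeof(region));
            region.slot = 0;                    /* illustrative slot id */
            region.flags = KVM_MEM_GUEST_MEMFD; /* gmem-backed slot */
            region.guest_phys_addr = gpa;
            region.memory_size = size;
            region.guest_memfd = gmem_fd;
            region.guest_memfd_offset = 0;

            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
    }

A stage-2 fault on such a slot reaches the new faultin_pfn() below with
is_private == true and resolves the pfn via kvm_gmem_get_pfn() instead of
a VMA walk.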
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 887ffa1f5b14..adb0681fc1c6 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1454,6 +1454,30 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     gfn_t gfn, bool write_fault, bool *writable,
+			     struct page **page, bool is_private)
+{
+	kvm_pfn_t pfn;
+	int ret;
+
+	if (!is_private)
+		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
+
+	*writable = false;
+
+	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
+	if (!ret) {
+		*writable = !memslot_is_readonly(slot);
+		return pfn;
+	}
+
+	if (ret == -EHWPOISON)
+		return KVM_PFN_ERR_HWPOISON;
+
+	return KVM_PFN_ERR_NOSLOT_MASK;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1461,19 +1485,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 {
 	int ret = 0;
 	bool write_fault, writable;
-	bool exec_fault, mte_allowed;
+	bool exec_fault, mte_allowed = false;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct vm_area_struct *vma;
+	struct vm_area_struct *vma = NULL;
 	short vma_shift;
 	void *memcache;
-	gfn_t gfn;
+	gfn_t gfn = ipa >> PAGE_SHIFT;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	bool is_gmem = kvm_mem_is_private(kvm, gfn);
+	bool force_pte = logging_active || is_gmem || is_protected_kvm_enabled();
+	long vma_pagesize, fault_granule = PAGE_SIZE;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1510,16 +1535,22 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return ret;
 	}
 
+	mmap_read_lock(current->mm);
+
 	/*
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
 	 * get block mapping for device MMIO region.
 	 */
-	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, hva);
-	if (unlikely(!vma)) {
-		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
-		mmap_read_unlock(current->mm);
-		return -EFAULT;
+	if (!is_gmem) {
+		vma = vma_lookup(current->mm, hva);
+		if (unlikely(!vma)) {
+			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+			mmap_read_unlock(current->mm);
+			return -EFAULT;
+		}
+
+		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
+		mte_allowed = kvm_vma_mte_allowed(vma);
 	}
 
 	if (force_pte)
@@ -1590,18 +1621,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ipa &= ~(vma_pagesize - 1);
 	}
 
-	gfn = ipa >> PAGE_SHIFT;
-	mte_allowed = kvm_vma_mte_allowed(vma);
-
-	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
-
 	/* Don't use the VMA after the unlock -- it may have vanished */
 	vma = NULL;
 
 	/*
 	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
-	 * acquiring kvm->mmu_lock.
+	 * vma_lookup() or faultin_pfn() become stale prior to acquiring
+	 * kvm->mmu_lock.
 	 *
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
 	 * with the smp_wmb() in kvm_mmu_invalidate_end().
@@ -1609,8 +1635,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);
 
-	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
-				&writable, &page);
+	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_gmem);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 296f1d284d55..88efbbf04db1 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1882,6 +1882,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 	return gfn_to_memslot(kvm, gfn)->id;
 }
 
+static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
+{
+	return slot->flags & KVM_MEM_READONLY;
+}
+
 static inline gfn_t hva_to_gfn_memslot(unsigned long hva,
 				       struct kvm_memory_slot *slot)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 38f0f402ea46..3e40acb9f5c0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2624,11 +2624,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 	return size;
 }
 
-static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
-{
-	return slot->flags & KVM_MEM_READONLY;
-}
-
 static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot,
 				       gfn_t gfn, gfn_t *nr_pages, bool write)
 {
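Also for context: the mmu_invalidate_seq comment updated in the mmu.c hunks
above refers to KVM's pre-existing invalidation-retry pattern, which this
patch keeps for both the VMA and the gmem path. A simplified sketch of that
pattern (not code introduced by this patch; error handling elided):

	mmap_read_lock(current->mm);
	/* ... resolve mapping parameters; VMA walk is skipped for gmem ... */
	mmu_seq = vcpu->kvm->mmu_invalidate_seq;	/* sample before unlock */
	mmap_read_unlock(current->mm);			/* implies smp_rmb() */

	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable,
			  &page, is_gmem);

	read_lock(&kvm->mmu_lock);
	if (mmu_invalidate_retry(kvm, mmu_seq)) {
		/* An invalidation raced with the lookup; pfn may be stale. */
		ret = -EAGAIN;		/* let the vCPU take the fault again */
		goto out_unlock;
	}
	/* ... install the stage-2 mapping under kvm->mmu_lock ... */

The smp_rmb() implied by mmap_read_unlock() pairs with the smp_wmb() in
kvm_mmu_invalidate_end(), so a sequence bump by a concurrent invalidation
is guaranteed to be observed here before the mapping is installed.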