Message ID | 1554909832-7169-2-git-send-email-suzuki.poulose@arm.com
---|---
State | New
Series | kvm: arm: Unify stage2 mapping for THP backed memory
On 2019/4/10 23:23, Suzuki K Poulose wrote:
> If we are checking whether the stage2 can map PAGE_SIZE,
> we don't have to do the boundary checks as both the host
> VMA and the guest memslots are page aligned. Bail the case
> easily.
>
> Cc: Christoffer Dall <christoffer.dall@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> ---
>  virt/kvm/arm/mmu.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index a39dcfd..6d73322 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>  	hva_t uaddr_start, uaddr_end;
>  	size_t size;
>
> +	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
> +	if (map_size == PAGE_SIZE)
> +		return true;
> +
>  	size = memslot->npages * PAGE_SIZE;
>
>  	gpa_start = memslot->base_gfn << PAGE_SHIFT;
>
We can do a comment clean up as well in this patch.

s/<< PAGE_SIZE/<< PAGE_SHIFT/

thanks,

zenghui
On 04/11/2019 02:48 AM, Zenghui Yu wrote:
>
> On 2019/4/10 23:23, Suzuki K Poulose wrote:
>> If we are checking whether the stage2 can map PAGE_SIZE,
>> we don't have to do the boundary checks as both the host
>> VMA and the guest memslots are page aligned. Bail the case
>> easily.
>>
>> Cc: Christoffer Dall <christoffer.dall@arm.com>
>> Cc: Marc Zyngier <marc.zyngier@arm.com>
>> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>> ---
>>  virt/kvm/arm/mmu.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
>> index a39dcfd..6d73322 100644
>> --- a/virt/kvm/arm/mmu.c
>> +++ b/virt/kvm/arm/mmu.c
>> @@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
>>  	hva_t uaddr_start, uaddr_end;
>>  	size_t size;
>>
>> +	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
>> +	if (map_size == PAGE_SIZE)
>> +		return true;
>> +
>>  	size = memslot->npages * PAGE_SIZE;
>>
>>  	gpa_start = memslot->base_gfn << PAGE_SHIFT;
>>
> We can do a comment clean up as well in this patch.
>
> s/<< PAGE_SIZE/<< PAGE_SHIFT/

Sure, I missed that. Will fix it in the next version.

Cheers
Suzuki
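[Editor's note: the comment Zenghui is pointing at is most likely the stage-2 layout
diagram further down in the same function, where the guest IPA side of the picture
is labelled with << PAGE_SIZE even though the address is actually derived with a
PAGE_SHIFT shift. Assuming that is the spot, the suggested cleanup would look
roughly like:

-	 *    memslot->base_gfn << PAGE_SIZE:
+	 *    memslot->base_gfn << PAGE_SHIFT:
]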
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
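[Editor's note: for reference, here is a simplified sketch of the function with the
new early return in place, reconstructed from the mainline
fault_supports_stage2_huge_mapping() of that period with the long layout comment
condensed — treat it as illustrative, not the exact upstream code. It shows why the
bail-out is safe: when map_size == PAGE_SIZE, the alignment check below compares
only the sub-page bits of gpa_start and uaddr_start, which are zero for a
page-aligned memslot and VMA, and the boundary check reduces to the faulting hva
lying inside the memslot, which the abort path has already established.

static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
					       unsigned long hva,
					       unsigned long map_size)
{
	gpa_t gpa_start;
	hva_t uaddr_start, uaddr_end;
	size_t size;

	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
	if (map_size == PAGE_SIZE)
		return true;

	size = memslot->npages * PAGE_SIZE;

	gpa_start = memslot->base_gfn << PAGE_SHIFT;

	uaddr_start = memslot->userspace_addr;
	uaddr_end = uaddr_start + size;

	/*
	 * The userspace address and the guest IPA must be equally aligned
	 * within a map_size block, or a stage-2 block entry would map the
	 * wrong host pages.
	 */
	if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1)))
		return false;

	/*
	 * The map_size block around hva must also fall entirely within
	 * the memslot's userspace mapping.
	 */
	return (hva & ~(map_size - 1)) >= uaddr_start &&
	       (hva & ~(map_size - 1)) + map_size <= uaddr_end;
}
]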
If we are checking whether the stage2 can map PAGE_SIZE,
we don't have to do the boundary checks as both the host
VMA and the guest memslots are page aligned. Bail the case
easily.

Cc: Christoffer Dall <christoffer.dall@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
---
 virt/kvm/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

-- 
2.7.4