From patchwork Sun Nov 16 19:44:21 2014
X-Patchwork-Submitter: Peter Maydell
X-Patchwork-Id: 40878
From: Peter Maydell <peter.maydell@linaro.org>
To: qemu-devel@nongnu.org
Date: Sun, 16 Nov 2014 19:44:21 +0000
Message-Id: <1416167061-13203-1-git-send-email-peter.maydell@linaro.org>
Cc: Paolo Bonzini, Stefan Hajnoczi
Subject: [Qemu-devel] [PATCH] exec: Handle multipage ranges in invalidate_and_set_dirty()

The code in invalidate_and_set_dirty() needs to handle addr/length
combinations which cross guest physical page boundaries. This can happen,
for example, when disk I/O reads large blocks into guest RAM which
previously held code that we have cached translations for. Unfortunately
we were only checking the clean/dirty status of the first page in the
range, and then calling a tb_invalidate function which only handles
ranges that don't cross page boundaries. Fix the function to deal with
multipage ranges.

The symptoms of this bug were that guest code would misbehave (eg
segfault), in particular after a guest reboot but potentially any time
the guest reused a page of its physical RAM for new code.

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini
---
This seems pretty nasty, and I have no idea why it hasn't been wreaking
more havoc before now. I'm not entirely sure why we invalidate TBs if any
of the dirty bits is set rather than only if the code bit is set, but I
left that logic as it is.

Review appreciated -- it would be nice to get this into rc2 if we can,
I think.
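To make the failure mode concrete, here is a minimal standalone sketch
(plain C, not QEMU code) assuming a hypothetical 4 KiB page size: an I/O
write that starts in an already-dirty page but spills into a clean page is
declared "nothing to do" by a first-page-only check, while a whole-range
check notices the clean page. The bitmap, page size and function names
(first_page_only_is_clean, range_includes_clean) are made up for this
illustration and are not QEMU APIs.

/* Standalone sketch of the bug described above; hypothetical names. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_BITS 12
#define PAGE_SIZE (1u << PAGE_BITS)
#define NUM_PAGES 16

static bool page_is_dirty[NUM_PAGES];   /* true = dirty, false = clean */

/* Old behaviour: only look at the page containing 'addr'. */
static bool first_page_only_is_clean(uint64_t addr, uint64_t length)
{
    (void)length;
    return !page_is_dirty[addr >> PAGE_BITS];
}

/* Fixed behaviour: clean if *any* page overlapped by the range is clean. */
static bool range_includes_clean(uint64_t addr, uint64_t length)
{
    uint64_t first = addr >> PAGE_BITS;
    uint64_t last = (addr + length - 1) >> PAGE_BITS;

    for (uint64_t p = first; p <= last; p++) {
        if (!page_is_dirty[p]) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    /* Page 0 is already dirty; page 1 is clean and may hold cached code. */
    page_is_dirty[0] = true;
    page_is_dirty[1] = false;

    /* An 8 KiB DMA write starting at offset 0 covers pages 0 and 1. */
    uint64_t addr = 0, length = 2 * PAGE_SIZE;

    printf("old check says clean: %d\n", first_page_only_is_clean(addr, length));
    printf("new check says clean: %d\n", range_includes_clean(addr, length));
    return 0;
}

Compiled as-is, this prints 0 for the old check and 1 for the new one on
the same 8 KiB range, which is exactly the case where translations cached
for page 1 would previously have survived the write.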
 exec.c                  |  6 ++----
 include/exec/ram_addr.h | 25 +++++++++++++++++++++++++
 2 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/exec.c b/exec.c
index 759055d..f0e2bd3 100644
--- a/exec.c
+++ b/exec.c
@@ -2066,10 +2066,8 @@ int cpu_memory_rw_debug(CPUState *cpu, target_ulong addr,
 
 static void invalidate_and_set_dirty(hwaddr addr, hwaddr length)
 {
-    if (cpu_physical_memory_is_clean(addr)) {
-        /* invalidate code */
-        tb_invalidate_phys_page_range(addr, addr + length, 0);
-        /* set dirty bit */
+    if (cpu_physical_memory_range_includes_clean(addr, length)) {
+        tb_invalidate_phys_range(addr, addr + length, 0);
         cpu_physical_memory_set_dirty_range_nocode(addr, length);
     }
     xen_modified_memory(addr, length);
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index cf1d4c7..8fc75cd 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -49,6 +49,21 @@ static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
     return next < end;
 }
 
+static inline bool cpu_physical_memory_get_clean(ram_addr_t start,
+                                                 ram_addr_t length,
+                                                 unsigned client)
+{
+    unsigned long end, page, next;
+
+    assert(client < DIRTY_MEMORY_NUM);
+
+    end = TARGET_PAGE_ALIGN(start + length) >> TARGET_PAGE_BITS;
+    page = start >> TARGET_PAGE_BITS;
+    next = find_next_zero_bit(ram_list.dirty_memory[client], end, page);
+
+    return next < end;
+}
+
 static inline bool cpu_physical_memory_get_dirty_flag(ram_addr_t addr,
                                                       unsigned client)
 {
@@ -64,6 +79,16 @@ static inline bool cpu_physical_memory_is_clean(ram_addr_t addr)
     return !(vga && code && migration);
 }
 
+static inline bool cpu_physical_memory_range_includes_clean(ram_addr_t start,
+                                                            ram_addr_t length)
+{
+    bool vga = cpu_physical_memory_get_clean(start, length, DIRTY_MEMORY_VGA);
+    bool code = cpu_physical_memory_get_clean(start, length, DIRTY_MEMORY_CODE);
+    bool migration =
+        cpu_physical_memory_get_clean(start, length, DIRTY_MEMORY_MIGRATION);
+    return vga || code || migration;
+}
+
 static inline void cpu_physical_memory_set_dirty_flag(ram_addr_t addr,
                                                       unsigned client)
 {
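For reference, here is a rough standalone model of the bitmap scan that the
new cpu_physical_memory_get_clean() helper performs. find_next_zero_bit()
below is a simplified stand-in for the real bitops routine, and the
one-bit-per-page dirty bitmap (1 = dirty, 0 = clean) is an assumption made
purely for illustration; it is not the actual ram_list layout.

/* Standalone model of the "any clean page in range?" scan; not QEMU code. */
#include <stdbool.h>
#include <stdio.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Return the index of the first zero bit in [start, end), or end if none. */
static unsigned long find_next_zero_bit(const unsigned long *bitmap,
                                        unsigned long end, unsigned long start)
{
    for (unsigned long i = start; i < end; i++) {
        if (!(bitmap[i / BITS_PER_LONG] & (1ul << (i % BITS_PER_LONG)))) {
            return i;
        }
    }
    return end;
}

/* Mirrors the patch: any zero (clean) bit in the page range => true. */
static bool range_has_clean_page(const unsigned long *dirty_bitmap,
                                 unsigned long first_page,
                                 unsigned long end_page)
{
    unsigned long next = find_next_zero_bit(dirty_bitmap, end_page, first_page);
    return next < end_page;
}

int main(void)
{
    unsigned long dirty[1] = { 0 };

    dirty[0] |= 1ul << 3;          /* mark page 3 dirty */

    /* Pages 3..5: page 3 is dirty but 4 and 5 are clean, so this prints 1. */
    printf("%d\n", range_has_clean_page(dirty, 3, 6));
    return 0;
}

Returning true as soon as any clean page is found in the range mirrors the
patch's approach: the caller then invalidates translations and re-dirties
the whole range rather than trying to track individual pages.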