From patchwork Fri Jun 15 19:43:47 2018
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 138777
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Fri, 15 Jun 2018 09:43:47 -1000
Message-Id: <20180615194354.12489-13-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180615194354.12489-1-richard.henderson@linaro.org>
References: <20180615194354.12489-1-richard.henderson@linaro.org>
MIME-Version: 1.0
Subject: [Qemu-devel] [PULL v2 12/19] translate-all: add page_locked assertions
Cc: peter.maydell@linaro.org, "Emilio G. Cota"

From: "Emilio G. Cota"

This is only compiled under CONFIG_DEBUG_TCG to avoid bloating
the binary.

In user-mode, assert_page_locked is equivalent to assert_mmap_lock.

Note: There are some tb_lock assertions left that will be removed
by later patches.

Reviewed-by: Richard Henderson
Suggested-by: Alex Bennée
Signed-off-by: Emilio G. Cota
Signed-off-by: Richard Henderson
---
 accel/tcg/translate-all.c | 82 +++++++++++++++++++++++++++++++++++++--
 1 file changed, 79 insertions(+), 3 deletions(-)

-- 
2.17.1

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 1cc7aab82c..8b378586f4 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -583,6 +583,9 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
 
 /* In user-mode page locks aren't used; mmap_lock is enough */
 #ifdef CONFIG_USER_ONLY
+
+#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
+
 static inline void page_lock(PageDesc *pd)
 { }
 
@@ -605,14 +608,80 @@ void page_collection_unlock(struct page_collection *set)
 { }
 
 #else /* !CONFIG_USER_ONLY */
 
+#ifdef CONFIG_DEBUG_TCG
+
+static __thread GHashTable *ht_pages_locked_debug;
+
+static void ht_pages_locked_debug_init(void)
+{
+    if (ht_pages_locked_debug) {
+        return;
+    }
+    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
+}
+
+static bool page_is_locked(const PageDesc *pd)
+{
+    PageDesc *found;
+
+    ht_pages_locked_debug_init();
+    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
+    return !!found;
+}
+
+static void page_lock__debug(PageDesc *pd)
+{
+    ht_pages_locked_debug_init();
+
+    g_assert(!page_is_locked(pd));
+    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
+}
+
+static void page_unlock__debug(const PageDesc *pd)
+{
+    bool removed;
+
+    ht_pages_locked_debug_init();
+    g_assert(page_is_locked(pd));
+    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
+    g_assert(removed);
+}
+
+static void
+do_assert_page_locked(const PageDesc *pd, const char *file, int line)
+{
+    if (unlikely(!page_is_locked(pd))) {
+        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
+                     pd, file, line);
+        abort();
+    }
+}
+
+#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
+
+#else /* !CONFIG_DEBUG_TCG */
+
+#define assert_page_locked(pd)
+
+static inline void page_lock__debug(const PageDesc *pd)
+{
+}
+
+static inline void page_unlock__debug(const PageDesc *pd)
+{
+}
+
+#endif /* CONFIG_DEBUG_TCG */
+
 static inline void page_lock(PageDesc *pd)
 {
+    page_lock__debug(pd);
     qemu_spin_lock(&pd->lock);
 }
 
 static inline void page_unlock(PageDesc *pd)
 {
     qemu_spin_unlock(&pd->lock);
+    page_unlock__debug(pd);
 }
 
 /* lock the page(s) of a TB in the correct acquisition order */
@@ -658,6 +727,7 @@ static bool page_entry_trylock(struct page_entry *pe)
     if (!busy) {
         g_assert(!pe->locked);
         pe->locked = true;
+        page_lock__debug(pe->pd);
     }
     return busy;
 }
@@ -775,6 +845,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
             g_tree_foreach(set->tree, page_entry_unlock, NULL);
             goto retry;
         }
+        assert_page_locked(pd);
         PAGE_FOR_EACH_TB(pd, tb, n) {
             if (page_trylock_add(set, tb->page_addr[0]) ||
                 (tb->page_addr[1] != -1 &&
@@ -1113,6 +1184,7 @@ static TranslationBlock *tb_alloc(target_ulong pc)
 /* call with @p->lock held */
 static inline void invalidate_page_bitmap(PageDesc *p)
 {
+    assert_page_locked(p);
 #ifdef CONFIG_SOFTMMU
     g_free(p->code_bitmap);
     p->code_bitmap = NULL;
@@ -1269,6 +1341,7 @@ static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
     uintptr_t *pprev;
     unsigned int n1;
 
+    assert_page_locked(pd);
     pprev = &pd->first_tb;
     PAGE_FOR_EACH_TB(pd, tb1, n1) {
         if (tb1 == tb) {
@@ -1417,6 +1490,7 @@ static void build_page_bitmap(PageDesc *p)
     int n, tb_start, tb_end;
     TranslationBlock *tb;
 
+    assert_page_locked(p);
     p->code_bitmap = bitmap_new(TARGET_PAGE_SIZE);
 
     PAGE_FOR_EACH_TB(p, tb, n) {
@@ -1450,7 +1524,7 @@ static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
     bool page_already_protected;
 #endif
 
-    assert_memory_lock();
+    assert_page_locked(p);
 
     tb->page_addr[n] = page_addr;
     tb->page_next[n] = p->first_tb;
@@ -1721,8 +1795,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     uint32_t current_flags = 0;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
-    assert_memory_lock();
-    assert_tb_locked();
+    assert_page_locked(p);
 
 #if defined(TARGET_HAS_PRECISE_SMC)
     if (cpu != NULL) {
@@ -1734,6 +1807,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     /* XXX: see if in some cases it could be faster to
        invalidate all the code */
     PAGE_FOR_EACH_TB(p, tb, n) {
+        assert_page_locked(p);
         /* NOTE: this is subtle as a TB may span two physical pages */
         if (n == 0) {
             /* NOTE: tb_end may be after the end of the page, but
@@ -1891,6 +1965,7 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
     }
 
     pages = page_collection_lock(start, start + len);
+    assert_page_locked(p);
     if (!p->code_bitmap &&
         ++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
         build_page_bitmap(p);
@@ -1949,6 +2024,7 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
         env = cpu->env_ptr;
     }
 #endif
+    assert_page_locked(p);
     PAGE_FOR_EACH_TB(p, tb, n) {
 #ifdef TARGET_HAS_PRECISE_SMC
         if (current_tb == tb &&