From patchwork Thu Oct 6 03:10:50 2022
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 612846
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
    imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 01/24] util: Add interval-tree.c
Date: Wed, 5 Oct 2022 20:10:50 -0700
Message-Id: <20221006031113.1139454-2-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Copy and simplify the Linux kernel's interval_tree_generic.h,
instantiating for uint64_t.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/qemu/interval-tree.h    |  99 ++++
 tests/unit/test-interval-tree.c | 209 ++++++++
 util/interval-tree.c            | 881 ++++++++++++++++++++++++++++++++
 tests/unit/meson.build          |   1 +
 util/meson.build                |   1 +
 5 files changed, 1191 insertions(+)
 create mode 100644 include/qemu/interval-tree.h
 create mode 100644 tests/unit/test-interval-tree.c
 create mode 100644 util/interval-tree.c
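As a quick illustration of the intended usage (a minimal sketch, not part of
the patch; the VMARegion type and names below are invented for the example):
callers embed IntervalTreeNode in their own structure, insert it with
inclusive [start, last] bounds, and recover the containing object with
container_of(), mirroring how RBNode is embedded inside IntervalTreeNode.

    /* Hypothetical example type; not part of this patch. */
    typedef struct VMARegion {
        IntervalTreeNode itree;   /* inclusive [start, last] interval */
        int prot;                 /* caller's payload */
    } VMARegion;

    static IntervalTreeRoot vma_root;

    static void vma_add(VMARegion *vma, uint64_t start, uint64_t last)
    {
        vma->itree.start = start;
        vma->itree.last = last;
        interval_tree_insert(&vma->itree, &vma_root);
    }

    static void vma_visit_overlaps(uint64_t start, uint64_t last)
    {
        /* Overlapping nodes are visited in order of increasing ->start. */
        for (IntervalTreeNode *n = interval_tree_iter_first(&vma_root,
                                                            start, last);
             n != NULL;
             n = interval_tree_iter_next(n, start, last)) {
            VMARegion *vma = container_of(n, VMARegion, itree);
            (void)vma;  /* ... use vma ... */
        }
    }

container_of() here is QEMU's usual macro from qemu/osdep.h.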
diff --git a/include/qemu/interval-tree.h b/include/qemu/interval-tree.h
new file mode 100644
index 0000000000..25006debe8
--- /dev/null
+++ b/include/qemu/interval-tree.h
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * Interval trees.
+ *
+ * Derived from include/linux/interval_tree.h and its dependencies.
+ */
+
+#ifndef QEMU_INTERVAL_TREE_H
+#define QEMU_INTERVAL_TREE_H
+
+/*
+ * For now, don't expose Linux Red-Black Trees separately, but retain the
+ * separate type definitions to keep the implementation sane, and allow
+ * the possibility of disentangling them later.
+ */
+typedef struct RBNode
+{
+    /* Encodes parent with color in the lsb. */
+    uintptr_t rb_parent_color;
+    struct RBNode *rb_right;
+    struct RBNode *rb_left;
+} RBNode;
+
+typedef struct RBRoot
+{
+    RBNode *rb_node;
+} RBRoot;
+
+typedef struct RBRootLeftCached {
+    RBRoot rb_root;
+    RBNode *rb_leftmost;
+} RBRootLeftCached;
+
+typedef struct IntervalTreeNode
+{
+    RBNode rb;
+
+    uint64_t start;    /* Start of interval */
+    uint64_t last;     /* Last location _in_ interval */
+    uint64_t subtree_last;
+} IntervalTreeNode;
+
+typedef RBRootLeftCached IntervalTreeRoot;
+
+/**
+ * interval_tree_is_empty
+ * @root: root of the tree.
+ *
+ * Returns true if the tree contains no nodes.
+ */
+static inline bool interval_tree_is_empty(const IntervalTreeRoot *root)
+{
+    return root->rb_root.rb_node == NULL;
+}
+
+/**
+ * interval_tree_insert
+ * @node: node to insert,
+ * @root: root of the tree.
+ *
+ * Insert @node into @root, and rebalance.
+ */
+void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_remove
+ * @node: node to remove,
+ * @root: root of the tree.
+ *
+ * Remove @node from @root, and rebalance.
+ */
+void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root);
+
+/**
+ * interval_tree_iter_first:
+ * @root: root of the tree,
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "first" of a set of nodes within the tree at @root
+ * that overlap the interval, where "first" is sorted by start.
+ * Returns NULL if no overlap is found.
+ */
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last);
+
+/**
+ * interval_tree_iter_next:
+ * @node: previous search result,
+ * @start, @last: the inclusive interval [start, last].
+ *
+ * Locate the "next" of a set of nodes within the tree that overlap the
+ * interval; @node is the result of a previous call to
+ * interval_tree_iter_{first,next}. Returns NULL if @node was the last
+ * node in the set.
+ */
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last);
+
+#endif /* QEMU_INTERVAL_TREE_H */
diff --git a/tests/unit/test-interval-tree.c b/tests/unit/test-interval-tree.c
new file mode 100644
index 0000000000..119817a019
--- /dev/null
+++ b/tests/unit/test-interval-tree.c
@@ -0,0 +1,209 @@
+/*
+ * Test interval trees
+ *
+ * This work is licensed under the terms of the GNU LGPL, version 2 or later.
+ * See the COPYING.LIB file in the top-level directory.
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
+
+static IntervalTreeNode nodes[20];
+static IntervalTreeRoot root;
+
+static void rand_interval(IntervalTreeNode *n, uint64_t start, uint64_t last)
+{
+    gint32 s_ofs, l_ofs, l_max;
+
+    if (last - start > INT32_MAX) {
+        l_max = INT32_MAX;
+    } else {
+        l_max = last - start;
+    }
+    s_ofs = g_test_rand_int_range(0, l_max);
+    l_ofs = g_test_rand_int_range(s_ofs, l_max);
+
+    n->start = start + s_ofs;
+    n->last = start + l_ofs;
+}
+
+static void test_empty(void)
+{
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, UINT64_MAX) == NULL);
+}
+
+static void test_find_one_point(void)
+{
+    /* Create a tree of a single node, which is the point [1,1]. */
+    nodes[0].start = 1;
+    nodes[0].last = 1;
+
+    interval_tree_insert(&nodes[0], &root);
+
+    g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 0) == NULL);
+    g_assert(interval_tree_iter_next(&nodes[0], 0, 0) == NULL);
+    g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 1, 2) == &nodes[0]);
+    g_assert(interval_tree_iter_first(&root, 2, 2) == NULL);
+
+    interval_tree_remove(&nodes[0], &root);
+    g_assert(root.rb_root.rb_node == NULL);
+    g_assert(root.rb_leftmost == NULL);
+}
+
+static void test_find_two_point(void)
+{
+    IntervalTreeNode *find0, *find1;
+
+    /* Create a tree of two nodes, which are both the point [1,1].
*/ + nodes[0].start = 1; + nodes[0].last = 1; + nodes[1] = nodes[0]; + + interval_tree_insert(&nodes[0], &root); + interval_tree_insert(&nodes[1], &root); + + find0 = interval_tree_iter_first(&root, 0, 9); + g_assert(find0 == &nodes[0] || find0 == &nodes[1]); + + find1 = interval_tree_iter_next(find0, 0, 9); + g_assert(find1 == &nodes[0] || find1 == &nodes[1]); + g_assert(find0 != find1); + + interval_tree_remove(&nodes[1], &root); + + g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]); + g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL); + + interval_tree_remove(&nodes[0], &root); +} + +static void test_find_one_range(void) +{ + /* Create a tree of a single node, which is the range [1,8]. */ + nodes[0].start = 1; + nodes[0].last = 8; + + interval_tree_insert(&nodes[0], &root); + + g_assert(interval_tree_iter_first(&root, 0, 9) == &nodes[0]); + g_assert(interval_tree_iter_next(&nodes[0], 0, 9) == NULL); + g_assert(interval_tree_iter_first(&root, 0, 0) == NULL); + g_assert(interval_tree_iter_first(&root, 0, 1) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 1, 1) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 4, 6) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 8, 8) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 9, 9) == NULL); + + interval_tree_remove(&nodes[0], &root); +} + +static void test_find_one_range_many(void) +{ + int i; + + /* + * Create a tree of many nodes in [0,99] and [200,299], + * but only one node with exactly [110,190]. + */ + nodes[0].start = 110; + nodes[0].last = 190; + + for (i = 1; i < ARRAY_SIZE(nodes) / 2; ++i) { + rand_interval(&nodes[i], 0, 99); + } + for (; i < ARRAY_SIZE(nodes); ++i) { + rand_interval(&nodes[i], 200, 299); + } + + for (i = 0; i < ARRAY_SIZE(nodes); ++i) { + interval_tree_insert(&nodes[i], &root); + } + + /* Test that we find exactly the one node. */ + g_assert(interval_tree_iter_first(&root, 100, 199) == &nodes[0]); + g_assert(interval_tree_iter_next(&nodes[0], 100, 199) == NULL); + g_assert(interval_tree_iter_first(&root, 100, 109) == NULL); + g_assert(interval_tree_iter_first(&root, 100, 110) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 111, 120) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 111, 199) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 190, 199) == &nodes[0]); + g_assert(interval_tree_iter_first(&root, 192, 199) == NULL); + + /* + * Test that if there are multiple matches, we return the one + * with the minimal start. + */ + g_assert(interval_tree_iter_first(&root, 100, 300) == &nodes[0]); + + /* Test that we don't find it after it is removed. */ + interval_tree_remove(&nodes[0], &root); + g_assert(interval_tree_iter_first(&root, 100, 199) == NULL); + + for (i = 1; i < ARRAY_SIZE(nodes); ++i) { + interval_tree_remove(&nodes[i], &root); + } +} + +static void test_find_many_range(void) +{ + IntervalTreeNode *find; + int i, n; + + n = g_test_rand_int_range(ARRAY_SIZE(nodes) / 3, ARRAY_SIZE(nodes) / 2); + + /* + * Create a fair few nodes in [2000,2999], with the others + * distributed around. + */ + for (i = 0; i < n; ++i) { + rand_interval(&nodes[i], 2000, 2999); + } + for (; i < ARRAY_SIZE(nodes) * 2 / 3; ++i) { + rand_interval(&nodes[i], 1000, 1899); + } + for (; i < ARRAY_SIZE(nodes); ++i) { + rand_interval(&nodes[i], 3100, 3999); + } + + for (i = 0; i < ARRAY_SIZE(nodes); ++i) { + interval_tree_insert(&nodes[i], &root); + } + + /* Test that we find all of the nodes. 
*/ + find = interval_tree_iter_first(&root, 2000, 2999); + for (i = 0; find != NULL; i++) { + find = interval_tree_iter_next(find, 2000, 2999); + } + g_assert_cmpint(i, ==, n); + + g_assert(interval_tree_iter_first(&root, 0, 999) == NULL); + g_assert(interval_tree_iter_first(&root, 1900, 1999) == NULL); + g_assert(interval_tree_iter_first(&root, 3000, 3099) == NULL); + g_assert(interval_tree_iter_first(&root, 4000, UINT64_MAX) == NULL); + + for (i = 0; i < ARRAY_SIZE(nodes); ++i) { + interval_tree_remove(&nodes[i], &root); + } +} + +int main(int argc, char **argv) +{ + g_test_init(&argc, &argv, NULL); + + g_test_add_func("/interval-tree/empty", test_empty); + g_test_add_func("/interval-tree/find-one-point", test_find_one_point); + g_test_add_func("/interval-tree/find-two-point", test_find_two_point); + g_test_add_func("/interval-tree/find-one-range", test_find_one_range); + g_test_add_func("/interval-tree/find-one-range-many", + test_find_one_range_many); + g_test_add_func("/interval-tree/find-many-range", test_find_many_range); + + return g_test_run(); +} diff --git a/util/interval-tree.c b/util/interval-tree.c new file mode 100644 index 0000000000..9578c05830 --- /dev/null +++ b/util/interval-tree.c @@ -0,0 +1,881 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ + +#include "qemu/osdep.h" +#include "qemu/interval-tree.h" +#include "qemu/atomic.h" + +/* + * Red Black Trees. + * + * For now, don't expose Linux Red-Black Trees separately, but retain the + * separate type definitions to keep the implementation sane, and allow + * the possibility of separating them later. + * + * Derived from include/linux/rbtree_augmented.h and its dependencies. + */ + +/* + * red-black trees properties: https://en.wikipedia.org/wiki/Rbtree + * + * 1) A node is either red or black + * 2) The root is black + * 3) All leaves (NULL) are black + * 4) Both children of every red node are black + * 5) Every simple path from root to leaves contains the same number + * of black nodes. + * + * 4 and 5 give the O(log n) guarantee, since 4 implies you cannot have two + * consecutive red nodes in a path and every red node is therefore followed by + * a black. So if B is the number of black nodes on every simple path (as per + * 5), then the longest possible path due to 4 is 2B. + * + * We shall indicate color with case, where black nodes are uppercase and red + * nodes will be lowercase. Unknown color nodes shall be drawn as red within + * parentheses and have some accompanying text comment. + * + * Notes on lockless lookups: + * + * All stores to the tree structure (rb_left and rb_right) must be done using + * WRITE_ONCE [qatomic_set for QEMU]. And we must not inadvertently cause + * (temporary) loops in the tree structure as seen in program order. + * + * These two requirements will allow lockless iteration of the tree -- not + * correct iteration mind you, tree rotations are not atomic so a lookup might + * miss entire subtrees. + * + * But they do guarantee that any such traversal will only see valid elements + * and that it will indeed complete -- does not get stuck in a loop. + * + * It also guarantees that if the lookup returns an element it is the 'correct' + * one. But not returning an element does _NOT_ mean it's not present. + * + * NOTE: + * + * Stores to __rb_parent_color are not important for simple lookups so those + * are left undone as of now. Nor did I check for loops involving parent + * pointers. 
+ */ + +typedef enum RBColor +{ + RB_RED, + RB_BLACK, +} RBColor; + +typedef struct RBAugmentCallbacks { + void (*propagate)(RBNode *node, RBNode *stop); + void (*copy)(RBNode *old, RBNode *new); + void (*rotate)(RBNode *old, RBNode *new); +} RBAugmentCallbacks; + +static inline RBNode *rb_parent(const RBNode *n) +{ + return (RBNode *)(n->rb_parent_color & ~1); +} + +static inline RBNode *rb_red_parent(const RBNode *n) +{ + return (RBNode *)n->rb_parent_color; +} + +static inline RBColor pc_color(uintptr_t pc) +{ + return (RBColor)(pc & 1); +} + +static inline bool pc_is_red(uintptr_t pc) +{ + return pc_color(pc) == RB_RED; +} + +static inline bool pc_is_black(uintptr_t pc) +{ + return !pc_is_red(pc); +} + +static inline RBColor rb_color(const RBNode *n) +{ + return pc_color(n->rb_parent_color); +} + +static inline bool rb_is_red(const RBNode *n) +{ + return pc_is_red(n->rb_parent_color); +} + +static inline bool rb_is_black(const RBNode *n) +{ + return pc_is_black(n->rb_parent_color); +} + +static inline void rb_set_black(RBNode *n) +{ + n->rb_parent_color |= RB_BLACK; +} + +static inline void rb_set_parent_color(RBNode *n, RBNode *p, RBColor color) +{ + n->rb_parent_color = (uintptr_t)p | color; +} + +static inline void rb_set_parent(RBNode *n, RBNode *p) +{ + rb_set_parent_color(n, p, rb_color(n)); +} + +static inline void rb_link_node(RBNode *node, RBNode *parent, RBNode **rb_link) +{ + node->rb_parent_color = (uintptr_t)parent; + node->rb_left = node->rb_right = NULL; + + qatomic_set(rb_link, node); +} + +static RBNode *rb_next(RBNode *node) +{ + RBNode *parent; + + /* OMIT: if empty node, return null. */ + + /* + * If we have a right-hand child, go down and then left as far as we can. + */ + if (node->rb_right) { + node = node->rb_right; + while (node->rb_left) { + node = node->rb_left; + } + return node; + } + + /* + * No right-hand children. Everything down and left is smaller than us, + * so any 'next' node must be in the general direction of our parent. + * Go up the tree; any time the ancestor is a right-hand child of its + * parent, keep going up. First time it's a left-hand child of its + * parent, said parent is our 'next' node. + */ + while ((parent = rb_parent(node)) && node == parent->rb_right) { + node = parent; + } + + return parent; +} + +static inline void rb_change_child(RBNode *old, RBNode *new, + RBNode *parent, RBRoot *root) +{ + if (!parent) { + qatomic_set(&root->rb_node, new); + } else if (parent->rb_left == old) { + qatomic_set(&parent->rb_left, new); + } else { + qatomic_set(&parent->rb_right, new); + } +} + +static inline void rb_rotate_set_parents(RBNode *old, RBNode *new, + RBRoot *root, RBColor color) +{ + RBNode *parent = rb_parent(old); + + new->rb_parent_color = old->rb_parent_color; + rb_set_parent_color(old, new, color); + rb_change_child(old, new, parent, root); +} + +static void rb_insert_augmented(RBNode *node, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *parent = rb_red_parent(node), *gparent, *tmp; + + while (true) { + /* + * Loop invariant: node is red. + */ + if (unlikely(!parent)) { + /* + * The inserted node is root. Either this is the first node, or + * we recursed at Case 1 below and are no longer violating 4). + */ + rb_set_parent_color(node, NULL, RB_BLACK); + break; + } + + /* + * If there is a black parent, we are done. Otherwise, take some + * corrective action as, per 4), we don't want a red root or two + * consecutive red nodes. 
+ */ + if (rb_is_black(parent)) { + break; + } + + gparent = rb_red_parent(parent); + + tmp = gparent->rb_right; + if (parent != tmp) { /* parent == gparent->rb_left */ + if (tmp && rb_is_red(tmp)) { + /* + * Case 1 - node's uncle is red (color flips). + * + * G g + * / \ / \ + * p u --> P U + * / / + * n n + * + * However, since g's parent might be red, and 4) does not + * allow this, we need to recurse at g. + */ + rb_set_parent_color(tmp, gparent, RB_BLACK); + rb_set_parent_color(parent, gparent, RB_BLACK); + node = gparent; + parent = rb_parent(node); + rb_set_parent_color(node, parent, RB_RED); + continue; + } + + tmp = parent->rb_right; + if (node == tmp) { + /* + * Case 2 - node's uncle is black and node is + * the parent's right child (left rotate at parent). + * + * G G + * / \ / \ + * p U --> n U + * \ / + * n p + * + * This still leaves us in violation of 4), the + * continuation into Case 3 will fix that. + */ + tmp = node->rb_left; + qatomic_set(&parent->rb_right, tmp); + qatomic_set(&node->rb_left, parent); + if (tmp) { + rb_set_parent_color(tmp, parent, RB_BLACK); + } + rb_set_parent_color(parent, node, RB_RED); + augment->rotate(parent, node); + parent = node; + tmp = node->rb_right; + } + + /* + * Case 3 - node's uncle is black and node is + * the parent's left child (right rotate at gparent). + * + * G P + * / \ / \ + * p U --> n g + * / \ + * n U + */ + qatomic_set(&gparent->rb_left, tmp); /* == parent->rb_right */ + qatomic_set(&parent->rb_right, gparent); + if (tmp) { + rb_set_parent_color(tmp, gparent, RB_BLACK); + } + rb_rotate_set_parents(gparent, parent, root, RB_RED); + augment->rotate(gparent, parent); + break; + } else { + tmp = gparent->rb_left; + if (tmp && rb_is_red(tmp)) { + /* Case 1 - color flips */ + rb_set_parent_color(tmp, gparent, RB_BLACK); + rb_set_parent_color(parent, gparent, RB_BLACK); + node = gparent; + parent = rb_parent(node); + rb_set_parent_color(node, parent, RB_RED); + continue; + } + + tmp = parent->rb_left; + if (node == tmp) { + /* Case 2 - right rotate at parent */ + tmp = node->rb_right; + qatomic_set(&parent->rb_left, tmp); + qatomic_set(&node->rb_right, parent); + if (tmp) { + rb_set_parent_color(tmp, parent, RB_BLACK); + } + rb_set_parent_color(parent, node, RB_RED); + augment->rotate(parent, node); + parent = node; + tmp = node->rb_left; + } + + /* Case 3 - left rotate at gparent */ + qatomic_set(&gparent->rb_right, tmp); /* == parent->rb_left */ + qatomic_set(&parent->rb_left, gparent); + if (tmp) { + rb_set_parent_color(tmp, gparent, RB_BLACK); + } + rb_rotate_set_parents(gparent, parent, root, RB_RED); + augment->rotate(gparent, parent); + break; + } + } +} + +static void rb_insert_augmented_cached(RBNode *node, + RBRootLeftCached *root, bool newleft, + const RBAugmentCallbacks *augment) +{ + if (newleft) { + root->rb_leftmost = node; + } + rb_insert_augmented(node, &root->rb_root, augment); +} + +static void rb_erase_color(RBNode *parent, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *node = NULL, *sibling, *tmp1, *tmp2; + + while (true) { + /* + * Loop invariants: + * - node is black (or NULL on first iteration) + * - node is not the root (parent is not NULL) + * - All leaf paths going through parent and node have a + * black node count that is 1 lower than other leaf paths. 
+ */ + sibling = parent->rb_right; + if (node != sibling) { /* node == parent->rb_left */ + if (rb_is_red(sibling)) { + /* + * Case 1 - left rotate at parent + * + * P S + * / \ / \ + * N s --> p Sr + * / \ / \ + * Sl Sr N Sl + */ + tmp1 = sibling->rb_left; + qatomic_set(&parent->rb_right, tmp1); + qatomic_set(&sibling->rb_left, parent); + rb_set_parent_color(tmp1, parent, RB_BLACK); + rb_rotate_set_parents(parent, sibling, root, RB_RED); + augment->rotate(parent, sibling); + sibling = tmp1; + } + tmp1 = sibling->rb_right; + if (!tmp1 || rb_is_black(tmp1)) { + tmp2 = sibling->rb_left; + if (!tmp2 || rb_is_black(tmp2)) { + /* + * Case 2 - sibling color flip + * (p could be either color here) + * + * (p) (p) + * / \ / \ + * N S --> N s + * / \ / \ + * Sl Sr Sl Sr + * + * This leaves us violating 5) which + * can be fixed by flipping p to black + * if it was red, or by recursing at p. + * p is red when coming from Case 1. + */ + rb_set_parent_color(sibling, parent, RB_RED); + if (rb_is_red(parent)) { + rb_set_black(parent); + } else { + node = parent; + parent = rb_parent(node); + if (parent) { + continue; + } + } + break; + } + /* + * Case 3 - right rotate at sibling + * (p could be either color here) + * + * (p) (p) + * / \ / \ + * N S --> N sl + * / \ \ + * sl Sr S + * \ + * Sr + * + * Note: p might be red, and then bot + * p and sl are red after rotation (which + * breaks property 4). This is fixed in + * Case 4 (in rb_rotate_set_parents() + * which set sl the color of p + * and set p RB_BLACK) + * + * (p) (sl) + * / \ / \ + * N sl --> P S + * \ / \ + * S N Sr + * \ + * Sr + */ + tmp1 = tmp2->rb_right; + qatomic_set(&sibling->rb_left, tmp1); + qatomic_set(&tmp2->rb_right, sibling); + qatomic_set(&parent->rb_right, tmp2); + if (tmp1) { + rb_set_parent_color(tmp1, sibling, RB_BLACK); + } + augment->rotate(sibling, tmp2); + tmp1 = sibling; + sibling = tmp2; + } + /* + * Case 4 - left rotate at parent + color flips + * (p and sl could be either color here. 
+ * After rotation, p becomes black, s acquires + * p's color, and sl keeps its color) + * + * (p) (s) + * / \ / \ + * N S --> P Sr + * / \ / \ + * (sl) sr N (sl) + */ + tmp2 = sibling->rb_left; + qatomic_set(&parent->rb_right, tmp2); + qatomic_set(&sibling->rb_left, parent); + rb_set_parent_color(tmp1, sibling, RB_BLACK); + if (tmp2) { + rb_set_parent(tmp2, parent); + } + rb_rotate_set_parents(parent, sibling, root, RB_BLACK); + augment->rotate(parent, sibling); + break; + } else { + sibling = parent->rb_left; + if (rb_is_red(sibling)) { + /* Case 1 - right rotate at parent */ + tmp1 = sibling->rb_right; + qatomic_set(&parent->rb_left, tmp1); + qatomic_set(&sibling->rb_right, parent); + rb_set_parent_color(tmp1, parent, RB_BLACK); + rb_rotate_set_parents(parent, sibling, root, RB_RED); + augment->rotate(parent, sibling); + sibling = tmp1; + } + tmp1 = sibling->rb_left; + if (!tmp1 || rb_is_black(tmp1)) { + tmp2 = sibling->rb_right; + if (!tmp2 || rb_is_black(tmp2)) { + /* Case 2 - sibling color flip */ + rb_set_parent_color(sibling, parent, RB_RED); + if (rb_is_red(parent)) { + rb_set_black(parent); + } else { + node = parent; + parent = rb_parent(node); + if (parent) { + continue; + } + } + break; + } + /* Case 3 - left rotate at sibling */ + tmp1 = tmp2->rb_left; + qatomic_set(&sibling->rb_right, tmp1); + qatomic_set(&tmp2->rb_left, sibling); + qatomic_set(&parent->rb_left, tmp2); + if (tmp1) { + rb_set_parent_color(tmp1, sibling, RB_BLACK); + } + augment->rotate(sibling, tmp2); + tmp1 = sibling; + sibling = tmp2; + } + /* Case 4 - right rotate at parent + color flips */ + tmp2 = sibling->rb_right; + qatomic_set(&parent->rb_left, tmp2); + qatomic_set(&sibling->rb_right, parent); + rb_set_parent_color(tmp1, sibling, RB_BLACK); + if (tmp2) { + rb_set_parent(tmp2, parent); + } + rb_rotate_set_parents(parent, sibling, root, RB_BLACK); + augment->rotate(parent, sibling); + break; + } + } +} + +static void rb_erase_augmented(RBNode *node, RBRoot *root, + const RBAugmentCallbacks *augment) +{ + RBNode *child = node->rb_right; + RBNode *tmp = node->rb_left; + RBNode *parent, *rebalance; + uintptr_t pc; + + if (!tmp) { + /* + * Case 1: node to erase has no more than 1 child (easy!) + * + * Note that if there is one child it must be red due to 5) + * and node must be black due to 4). We adjust colors locally + * so as to bypass rb_erase_color() later on. + */ + pc = node->rb_parent_color; + parent = rb_parent(node); + rb_change_child(node, child, parent, root); + if (child) { + child->rb_parent_color = pc; + rebalance = NULL; + } else { + rebalance = pc_is_black(pc) ? 
parent : NULL; + } + tmp = parent; + } else if (!child) { + /* Still case 1, but this time the child is node->rb_left */ + pc = node->rb_parent_color; + parent = rb_parent(node); + tmp->rb_parent_color = pc; + rb_change_child(node, tmp, parent, root); + rebalance = NULL; + tmp = parent; + } else { + RBNode *successor = child, *child2; + tmp = child->rb_left; + if (!tmp) { + /* + * Case 2: node's successor is its right child + * + * (n) (s) + * / \ / \ + * (x) (s) -> (x) (c) + * \ + * (c) + */ + parent = successor; + child2 = successor->rb_right; + + augment->copy(node, successor); + } else { + /* + * Case 3: node's successor is leftmost under + * node's right child subtree + * + * (n) (s) + * / \ / \ + * (x) (y) -> (x) (y) + * / / + * (p) (p) + * / / + * (s) (c) + * \ + * (c) + */ + do { + parent = successor; + successor = tmp; + tmp = tmp->rb_left; + } while (tmp); + child2 = successor->rb_right; + qatomic_set(&parent->rb_left, child2); + qatomic_set(&successor->rb_right, child); + rb_set_parent(child, successor); + + augment->copy(node, successor); + augment->propagate(parent, successor); + } + + tmp = node->rb_left; + qatomic_set(&successor->rb_left, tmp); + rb_set_parent(tmp, successor); + + pc = node->rb_parent_color; + tmp = rb_parent(node); + rb_change_child(node, successor, tmp, root); + + if (child2) { + rb_set_parent_color(child2, parent, RB_BLACK); + rebalance = NULL; + } else { + rebalance = rb_is_black(successor) ? parent : NULL; + } + successor->rb_parent_color = pc; + tmp = successor; + } + + augment->propagate(tmp, NULL); + + if (rebalance) { + rb_erase_color(rebalance, root, augment); + } +} + +static void rb_erase_augmented_cached(RBNode *node, RBRootLeftCached *root, + const RBAugmentCallbacks *augment) +{ + if (root->rb_leftmost == node) { + root->rb_leftmost = rb_next(node); + } + rb_erase_augmented(node, &root->rb_root, augment); +} + + +/* + * Interval trees. + * + * Derived from lib/interval_tree.c and its dependencies, + * especially include/linux/interval_tree_generic.h. 
+ */ + +#define rb_to_itree(N) container_of(N, IntervalTreeNode, rb) + +static bool interval_tree_compute_max(IntervalTreeNode *node, bool exit) +{ + IntervalTreeNode *child; + uint64_t max = node->last; + + if (node->rb.rb_left) { + child = rb_to_itree(node->rb.rb_left); + if (child->subtree_last > max) { + max = child->subtree_last; + } + } + if (node->rb.rb_right) { + child = rb_to_itree(node->rb.rb_right); + if (child->subtree_last > max) { + max = child->subtree_last; + } + } + if (exit && node->subtree_last == max) { + return true; + } + node->subtree_last = max; + return false; +} + +static void interval_tree_propagate(RBNode *rb, RBNode *stop) +{ + while (rb != stop) { + IntervalTreeNode *node = rb_to_itree(rb); + if (interval_tree_compute_max(node, true)) { + break; + } + rb = rb_parent(&node->rb); + } +} + +static void interval_tree_copy(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old = rb_to_itree(rb_old); + IntervalTreeNode *new = rb_to_itree(rb_new); + + new->subtree_last = old->subtree_last; +} + +static void interval_tree_rotate(RBNode *rb_old, RBNode *rb_new) +{ + IntervalTreeNode *old = rb_to_itree(rb_old); + IntervalTreeNode *new = rb_to_itree(rb_new); + + new->subtree_last = old->subtree_last; + interval_tree_compute_max(old, false); +} + +static const RBAugmentCallbacks interval_tree_augment = { + .propagate = interval_tree_propagate, + .copy = interval_tree_copy, + .rotate = interval_tree_rotate, +}; + +/* Insert / remove interval nodes from the tree */ +void interval_tree_insert(IntervalTreeNode *node, IntervalTreeRoot *root) +{ + RBNode **link = &root->rb_root.rb_node, *rb_parent = NULL; + uint64_t start = node->start, last = node->last; + IntervalTreeNode *parent; + bool leftmost = true; + + while (*link) { + rb_parent = *link; + parent = rb_to_itree(rb_parent); + + if (parent->subtree_last < last) { + parent->subtree_last = last; + } + if (start < parent->start) { + link = &parent->rb.rb_left; + } else { + link = &parent->rb.rb_right; + leftmost = false; + } + } + + node->subtree_last = last; + rb_link_node(&node->rb, rb_parent, link); + rb_insert_augmented_cached(&node->rb, root, leftmost, + &interval_tree_augment); +} + +void interval_tree_remove(IntervalTreeNode *node, IntervalTreeRoot *root) +{ + rb_erase_augmented_cached(&node->rb, root, &interval_tree_augment); +} + +/* + * Iterate over intervals intersecting [start;last] + * + * Note that a node's interval intersects [start;last] iff: + * Cond1: node->start <= last + * and + * Cond2: start <= node->last + */ + +static IntervalTreeNode *interval_tree_subtree_search(IntervalTreeNode *node, + uint64_t start, + uint64_t last) +{ + while (true) { + /* + * Loop invariant: start <= node->subtree_last + * (Cond2 is satisfied by one of the subtree nodes) + */ + if (node->rb.rb_left) { + IntervalTreeNode *left = rb_to_itree(node->rb.rb_left); + + if (start <= left->subtree_last) { + /* + * Some nodes in left subtree satisfy Cond2. + * Iterate to find the leftmost such node N. + * If it also satisfies Cond1, that's the + * match we are looking for. Otherwise, there + * is no matching interval as nodes to the + * right of N can't satisfy Cond1 either. 
+                 */
+                node = left;
+                continue;
+            }
+        }
+        if (node->start <= last) {        /* Cond1 */
+            if (start <= node->last) {    /* Cond2 */
+                return node; /* node is leftmost match */
+            }
+            if (node->rb.rb_right) {
+                node = rb_to_itree(node->rb.rb_right);
+                if (start <= node->subtree_last) {
+                    continue;
+                }
+            }
+        }
+        return NULL; /* no match */
+    }
+}
+
+IntervalTreeNode *interval_tree_iter_first(IntervalTreeRoot *root,
+                                           uint64_t start, uint64_t last)
+{
+    IntervalTreeNode *node, *leftmost;
+
+    if (!root->rb_root.rb_node) {
+        return NULL;
+    }
+
+    /*
+     * Fastpath range intersection/overlap between A: [a0, a1] and
+     * B: [b0, b1] is given by:
+     *
+     *         a0 <= b1 && b0 <= a1
+     *
+     * ... where A holds the lock range and B holds the smallest
+     * 'start' and largest 'last' in the tree. For the latter, we
+     * rely on the root node, which, by the augmented interval tree
+     * property, holds the largest value in its last-in-subtree.
+     * This allows mitigating some of the tree walk overhead for
+     * non-intersecting ranges, maintained and consulted in O(1).
+     */
+    node = rb_to_itree(root->rb_root.rb_node);
+    if (node->subtree_last < start) {
+        return NULL;
+    }
+
+    leftmost = rb_to_itree(root->rb_leftmost);
+    if (leftmost->start > last) {
+        return NULL;
+    }
+
+    return interval_tree_subtree_search(node, start, last);
+}
+
+IntervalTreeNode *interval_tree_iter_next(IntervalTreeNode *node,
+                                          uint64_t start, uint64_t last)
+{
+    RBNode *rb = node->rb.rb_right, *prev;
+
+    while (true) {
+        /*
+         * Loop invariants:
+         *   Cond1: node->start <= last
+         *   rb == node->rb.rb_right
+         *
+         * First, search right subtree if suitable
+         */
+        if (rb) {
+            IntervalTreeNode *right = rb_to_itree(rb);
+
+            if (start <= right->subtree_last) {
+                return interval_tree_subtree_search(right, start, last);
+            }
+        }
+
+        /* Move up the tree until we come from a node's left child */
+        do {
+            rb = rb_parent(&node->rb);
+            if (!rb) {
+                return NULL;
+            }
+            prev = &node->rb;
+            node = rb_to_itree(rb);
+            rb = node->rb.rb_right;
+        } while (prev == rb);
+
+        /* Check if the node intersects [start;last] */
+        if (last < node->start) { /* !Cond1 */
+            return NULL;
+        }
+        if (start <= node->last) { /* Cond2 */
+            return node;
+        }
+    }
+}
+
+#if 1
+static void debug_interval_tree_int(IntervalTreeNode *node,
+                                    const char *dir, int level)
+{
+    printf("%4d %*s %s [%" PRIu64 ",%" PRIu64 "] subtree_last:%" PRIu64 "\n",
+           level, level + 1, dir, rb_is_red(&node->rb) ? "r" : "b",
+           node->start, node->last, node->subtree_last);
+
+    if (node->rb.rb_left) {
+        debug_interval_tree_int(rb_to_itree(node->rb.rb_left), "<", level + 1);
+    }
+    if (node->rb.rb_right) {
+        debug_interval_tree_int(rb_to_itree(node->rb.rb_right), ">", level + 1);
+    }
+}
+
+void debug_interval_tree(IntervalTreeNode *node);
+void debug_interval_tree(IntervalTreeNode *node)
+{
+    if (node) {
+        debug_interval_tree_int(node, "*", 0);
+    } else {
+        printf("null\n");
+    }
+}
+#endif
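One property worth calling out: every node's subtree_last must equal the
maximum 'last' across the node and both of its subtrees, and all of the
search pruning above depends on it. A minimal recursive checker in the
spirit of the debug printer above (illustration only, not part of the
patch):

    /* Verify the augmented invariant; returns the subtree maximum. */
    static uint64_t check_subtree_last(IntervalTreeNode *node)
    {
        uint64_t max = node->last;

        if (node->rb.rb_left) {
            uint64_t m = check_subtree_last(rb_to_itree(node->rb.rb_left));
            max = m > max ? m : max;
        }
        if (node->rb.rb_right) {
            uint64_t m = check_subtree_last(rb_to_itree(node->rb.rb_right));
            max = m > max ? m : max;
        }
        assert(node->subtree_last == max);
        return max;
    }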
"r" : "b", + node->start, node->last, node->subtree_last); + + if (node->rb.rb_left) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_left), "<", level + 1); + } + if (node->rb.rb_right) { + debug_interval_tree_int(rb_to_itree(node->rb.rb_right), ">", level + 1); + } +} + +void debug_interval_tree(IntervalTreeNode *node); +void debug_interval_tree(IntervalTreeNode *node) +{ + if (node) { + debug_interval_tree_int(node, "*", 0); + } else { + printf("null\n"); + } +} +#endif diff --git a/tests/unit/meson.build b/tests/unit/meson.build index b497a41378..ffa444f432 100644 --- a/tests/unit/meson.build +++ b/tests/unit/meson.build @@ -47,6 +47,7 @@ tests = { 'ptimer-test': ['ptimer-test-stubs.c', meson.project_source_root() / 'hw/core/ptimer.c'], 'test-qapi-util': [], 'test-smp-parse': [qom, meson.project_source_root() / 'hw/core/machine-smp.c'], + 'test-interval-tree': [], } if have_system or have_tools diff --git a/util/meson.build b/util/meson.build index 5e282130df..46a0d017a6 100644 --- a/util/meson.build +++ b/util/meson.build @@ -55,6 +55,7 @@ util_ss.add(files('guest-random.c')) util_ss.add(files('yank.c')) util_ss.add(files('int128.c')) util_ss.add(files('memalign.c')) +util_ss.add(files('interval-tree.c')) if have_user util_ss.add(files('selfmap.c')) From patchwork Thu Oct 6 03:10:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 612840 Delivered-To: patch@linaro.org Received: by 2002:a17:522:c983:b0:460:3032:e3c4 with SMTP id kr3csp1189835pvb; Wed, 5 Oct 2022 20:12:39 -0700 (PDT) X-Google-Smtp-Source: AMsMyM5RfO1Ro3Ku6VZXW2F7VwhuTukCFh0d4aCy/ANsqQxiD4e3L8QS3pWgyWH0jhSYpJwNuUZt X-Received: by 2002:a05:620a:1b98:b0:6ce:9f9e:6d4 with SMTP id dv24-20020a05620a1b9800b006ce9f9e06d4mr1925059qkb.542.1665025959825; Wed, 05 Oct 2022 20:12:39 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1665025959; cv=none; d=google.com; s=arc-20160816; b=AnTBna+fURBnq/STT+hL2LSFFy5CC3Kk/jvEjMo266+6+L05/mKM+H/NCugyHcI1L1 POdFG4XrhCRVubJ3Fiq/KSQXxsU+dcuoSyT0v1WksLuqbppYboZ90+FuZihMSt/fOaIf 7TKOP19Pc0SmaPQiX98LIb8hc7TJKxofhB9+0VjaF3HPHxYia7JVbNLZZL8bWGNEfQV7 NLfJwRX1jxxvyJUt+7uBl5Gj+8AnRX0HP0zrtWHudxc2uhR9jRnLTuZmDB88IwuuqG+6 JetjgaQ3b4c3M0cBvcWsihXC2Nr/feCmLLf4oTibfU1LY0z1RyvqdOUXR3o+QjlsrHUX TvbQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:dkim-signature; bh=GNx/wZFYP3b5pXqqccbxReodi1Cirhgoe15NhYoUnUs=; b=i1rwfhDZywQH8+aJrcnvU8nQg9sdiKXTQ2myOnah6iGoZl3SYAdwwe8OVlaYGtvM/q 3V5omYVHLAh5eyg9rbsmOjQsjOW4VrUyZB0NbK9aGFDaXxK0y/Gwf3GuUmV20w8gB/8x oVkFwXoeGMPMxczasiZ8jQL1caf30UkH1BNN9LXaVeW7axJ9QuGZ9j4/e4ErQGEfVtFZ Krerho/6gECagmimLLWdRPG3nSyidgcVkSGB3nmGSu8AFxnI3xZeoPNBVHmXd85O5www EwNln3gOLEl9eI2jAsCLwauxg6eS4Wb7lPvIb1QM0xL2Q72ggkfgDHQDxSfUNGCPgKnZ l9pQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=s1+LYIqD; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
From patchwork Thu Oct 6 03:10:51 2022
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 612840
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
    imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 02/24] accel/tcg: Make page_alloc_target_data allocation constant
Date: Wed, 5 Oct 2022 20:10:51 -0700
Message-Id: <20221006031113.1139454-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Use a constant target data allocation size for all pages.
This will be necessary to reduce overhead of page tracking.
Since TARGET_PAGE_DATA_SIZE is now required, we can use this
to omit data tracking for targets that don't require it.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/exec/cpu-all.h    | 9 ++++-----
 target/arm/cpu.h          | 8 ++++++++
 target/arm/internals.h    | 4 ----
 accel/tcg/translate-all.c | 8 ++++++--
 target/arm/mte_helper.c   | 3 +--
 5 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 16b7df41bf..854adc4ac2 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -281,19 +281,18 @@ void page_reset_target_data(target_ulong start, target_ulong end);
 int page_check_range(target_ulong start, target_ulong len, int flags);
 
 /**
- * page_alloc_target_data(address, size)
+ * page_alloc_target_data(address)
  * @address: guest virtual address
- * @size: size of data to allocate
  *
- * Allocate @size bytes of out-of-band data to associate with the
- * guest page at @address. If the page is not mapped, NULL will
+ * Allocate TARGET_PAGE_DATA_SIZE bytes of out-of-band data to associate
+ * with the guest page at @address. If the page is not mapped, NULL will
  * be returned. If there is existing data associated with @address,
  * no new memory will be allocated.
  *
  * The memory will be freed when the guest page is deallocated,
  * e.g. with the munmap system call.
  */
-void *page_alloc_target_data(target_ulong address, size_t size);
+void *page_alloc_target_data(target_ulong address);
 
 /**
  * page_get_target_data(address)
diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index 429ed42eec..2f44f7afc7 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -3423,6 +3423,14 @@ static inline MemTxAttrs *typecheck_memtxattrs(MemTxAttrs *x)
 #define PAGE_MTE            PAGE_TARGET_2
 #define PAGE_TARGET_STICKY  PAGE_MTE
 
+/* We associate one allocation tag per 16 bytes, the minimum. */
+#define LOG2_TAG_GRANULE 4
+#define TAG_GRANULE      (1 << LOG2_TAG_GRANULE)
+
+#ifdef CONFIG_USER_ONLY
+#define TARGET_PAGE_DATA_SIZE (TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1))
+#endif
+
 #ifdef TARGET_TAGGED_ADDRESSES
 /**
  * cpu_untagged_addr:
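The shift above packs two 16-byte granules per byte of storage, since each
MTE allocation tag is only 4 bits. Worked through for a common 4 KiB target
page (a standalone sanity check, not part of the patch):

    #include <assert.h>

    int main(void)
    {
        unsigned page_size = 4096;        /* example TARGET_PAGE_SIZE */
        unsigned log2_tag_granule = 4;    /* 16-byte TAG_GRANULE */

        /* One 4-bit tag per granule -> two granules per byte, hence +1. */
        unsigned data_size = page_size >> (log2_tag_granule + 1);
        assert(data_size == 128);         /* 128 bytes of tags per page */
        return 0;
    }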
diff --git a/target/arm/internals.h b/target/arm/internals.h
index 307a596505..d94efbaf5b 100644
--- a/target/arm/internals.h
+++ b/target/arm/internals.h
@@ -1165,10 +1165,6 @@ void arm_log_exception(CPUState *cs);
  */
 #define GMID_EL1_BS  6
 
-/* We associate one allocation tag per 16 bytes, the minimum. */
-#define LOG2_TAG_GRANULE 4
-#define TAG_GRANULE      (1 << LOG2_TAG_GRANULE)
-
 /*
  * SVE predicates are 1/8 the size of SVE vectors, and cannot use
  * the same simd_desc() encoding due to restrictions on size.
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 4ed75a13e1..64a2601f9f 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -2271,6 +2271,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
 
 void page_reset_target_data(target_ulong start, target_ulong end)
 {
+#ifdef TARGET_PAGE_DATA_SIZE
     target_ulong addr, len;
 
     /*
@@ -2293,15 +2294,17 @@ void page_reset_target_data(target_ulong start, target_ulong end)
             g_free(p->target_data);
             p->target_data = NULL;
         }
     }
+#endif
 }
 
+#ifdef TARGET_PAGE_DATA_SIZE
 void *page_get_target_data(target_ulong address)
 {
     PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
     return p ? p->target_data : NULL;
 }
 
-void *page_alloc_target_data(target_ulong address, size_t size)
+void *page_alloc_target_data(target_ulong address)
 {
     PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
     void *ret = NULL;
@@ -2309,11 +2312,12 @@ void *page_alloc_target_data(target_ulong address, size_t size)
     if (p->flags & PAGE_VALID) {
         ret = p->target_data;
         if (!ret) {
-            p->target_data = ret = g_malloc0(size);
+            p->target_data = ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
         }
     }
     return ret;
 }
+#endif /* TARGET_PAGE_DATA_SIZE */
 
 int page_check_range(target_ulong start, target_ulong len, int flags)
 {
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index fdd23ab3f8..62d36c127f 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -96,8 +96,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
 
     tags = page_get_target_data(clean_ptr);
     if (tags == NULL) {
-        size_t alloc_size = TARGET_PAGE_SIZE >> (LOG2_TAG_GRANULE + 1);
-        tags = page_alloc_target_data(clean_ptr, alloc_size);
+        tags = page_alloc_target_data(clean_ptr);
         assert(tags != NULL);
     }
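Because every use of the target-data machinery is now guarded by #ifdef
TARGET_PAGE_DATA_SIZE, targets that never define the macro compile none of
it. A hypothetical target that wanted, say, 8 bytes of out-of-band data per
page (value invented for the sketch) would only need, in its cpu.h:

    #ifdef CONFIG_USER_ONLY
    /* Hypothetical opt-in; the size here is invented for illustration. */
    #define TARGET_PAGE_DATA_SIZE 8
    #endif

after which page_alloc_target_data(addr) hands back that many zeroed bytes
per page, exactly as mte_helper.c relies on above.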
From patchwork Thu Oct 6 03:10:52 2022
X-Patchwork-Submitter: Richard Henderson <richard.henderson@linaro.org>
X-Patchwork-Id: 612841

From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
    imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 03/24] accel/tcg: Remove disabled debug in translate-all.c
Date: Wed, 5 Oct 2022 20:10:52 -0700
Message-Id: <20221006031113.1139454-4-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

These items printf, and could be replaced with proper tracepoints
if we really cared.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/translate-all.c | 109 --------------------------------------
 1 file changed, 109 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 64a2601f9f..42385fa032 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -63,33 +63,7 @@
 #include "tb-context.h"
 #include "internal.h"
 
-/* #define DEBUG_TB_INVALIDATE */
-/* #define DEBUG_TB_FLUSH */
 /* make various TB consistency checks */
-/* #define DEBUG_TB_CHECK */
-
-#ifdef DEBUG_TB_INVALIDATE
-#define DEBUG_TB_INVALIDATE_GATE 1
-#else
-#define DEBUG_TB_INVALIDATE_GATE 0
-#endif
-
-#ifdef DEBUG_TB_FLUSH
-#define DEBUG_TB_FLUSH_GATE 1
-#else
-#define DEBUG_TB_FLUSH_GATE 0
-#endif
-
-#if !defined(CONFIG_USER_ONLY)
-/* TB consistency checks only implemented for usermode emulation. */
-#undef DEBUG_TB_CHECK
-#endif
-
-#ifdef DEBUG_TB_CHECK
-#define DEBUG_TB_CHECK_GATE 1
-#else
-#define DEBUG_TB_CHECK_GATE 0
-#endif
 
 /* Access to the various translations structures need to be serialised via locks
  * for consistency.
@@ -940,15 +914,6 @@ static void page_flush_tb(void) } } -static gboolean tb_host_size_iter(gpointer key, gpointer value, gpointer data) -{ - const TranslationBlock *tb = value; - size_t *size = data; - - *size += tb->tc.size; - return false; -} - /* flush all the translation blocks */ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count) { @@ -963,15 +928,6 @@ static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count) } did_flush = true; - if (DEBUG_TB_FLUSH_GATE) { - size_t nb_tbs = tcg_nb_tbs(); - size_t host_size = 0; - - tcg_tb_foreach(tb_host_size_iter, &host_size); - printf("qemu: flush code_size=%zu nb_tbs=%zu avg_tb_size=%zu\n", - tcg_code_size(), nb_tbs, nb_tbs > 0 ? host_size / nb_tbs : 0); - } - CPU_FOREACH(cpu) { tcg_flush_jmp_cache(cpu); } @@ -1005,57 +961,6 @@ void tb_flush(CPUState *cpu) } } -/* - * Formerly ifdef DEBUG_TB_CHECK. These debug functions are user-mode-only, - * so in order to prevent bit rot we compile them unconditionally in user-mode, - * and let the optimizer get rid of them by wrapping their user-only callers - * with if (DEBUG_TB_CHECK_GATE). - */ -#ifdef CONFIG_USER_ONLY - -static void do_tb_invalidate_check(void *p, uint32_t hash, void *userp) -{ - TranslationBlock *tb = p; - target_ulong addr = *(target_ulong *)userp; - - if (!(addr + TARGET_PAGE_SIZE <= tb_pc(tb) || - addr >= tb_pc(tb) + tb->size)) { - printf("ERROR invalidate: address=" TARGET_FMT_lx - " PC=%08lx size=%04x\n", addr, (long)tb_pc(tb), tb->size); - } -} - -/* verify that all the pages have correct rights for code - * - * Called with mmap_lock held. - */ -static void tb_invalidate_check(target_ulong address) -{ - address &= TARGET_PAGE_MASK; - qht_iter(&tb_ctx.htable, do_tb_invalidate_check, &address); -} - -static void do_tb_page_check(void *p, uint32_t hash, void *userp) -{ - TranslationBlock *tb = p; - int flags1, flags2; - - flags1 = page_get_flags(tb_pc(tb)); - flags2 = page_get_flags(tb_pc(tb) + tb->size - 1); - if ((flags1 & PAGE_WRITE) || (flags2 & PAGE_WRITE)) { - printf("ERROR page flags: PC=%08lx size=%04x f1=%x f2=%x\n", - (long)tb_pc(tb), tb->size, flags1, flags2); - } -} - -/* verify that all the pages have correct rights for code */ -static void tb_page_check(void) -{ - qht_iter(&tb_ctx.htable, do_tb_page_check, NULL); -} - -#endif /* CONFIG_USER_ONLY */ - /* * user-mode: call with mmap_lock held * !user-mode: call with @pd->lock held @@ -1339,12 +1244,6 @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc, page_unlock(p2); } page_unlock(p); - -#ifdef CONFIG_USER_ONLY - if (DEBUG_TB_CHECK_GATE) { - tb_page_check(); - } -#endif return tb; } @@ -2400,9 +2299,6 @@ void page_protect(tb_page_addr_t page_addr) } mprotect(g2h_untagged(page_addr), qemu_host_page_size, (prot & PAGE_BITS) & ~PAGE_WRITE); - if (DEBUG_TB_INVALIDATE_GATE) { - printf("protecting code page: 0x" TB_PAGE_ADDR_FMT "\n", page_addr); - } } } @@ -2458,11 +2354,6 @@ int page_unprotect(target_ulong address, uintptr_t pc) /* and since the content will be modified, we must invalidate the corresponding translated code. 
*/ current_tb_invalidated |= tb_invalidate_phys_page(addr, pc); -#ifdef CONFIG_USER_ONLY - if (DEBUG_TB_CHECK_GATE) { - tb_invalidate_check(addr); - } -#endif } mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, prot & PAGE_BITS);
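The conversion the commit message above alludes to is mechanical in QEMU's tracing framework. As a rough sketch only (the event name, its arguments, and keeping the removed tb_host_size_iter helper are illustrative assumptions, not something this series does), the DEBUG_TB_FLUSH block could have become a trace event:

    /* accel/tcg/trace-events -- hypothetical entry, not added by this patch */
    tb_flush_stats(size_t code_size, size_t nb_tbs, size_t avg_tb_size) "code_size=%zu nb_tbs=%zu avg_tb_size=%zu"

    /* in do_tb_flush(), where the DEBUG_TB_FLUSH_GATE block used to be */
    size_t nb_tbs = tcg_nb_tbs();
    size_t host_size = 0;

    tcg_tb_foreach(tb_host_size_iter, &host_size);
    trace_tb_flush_stats(tcg_code_size(), nb_tbs,
                         nb_tbs > 0 ? host_size / nb_tbs : 0);

Unlike the compiled-out gates, such an event could then be enabled at runtime (e.g. with -trace tb_flush_stats) instead of requiring a rebuild with the #define uncommented.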
From patchwork Thu Oct 6 03:10:53 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612849
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 04/24] accel/tcg: Split out PageDesc to internal.h
Date: Wed, 5 Oct 2022 20:10:53 -0700
Message-Id: <20221006031113.1139454-5-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Signed-off-by: Richard Henderson
Reviewed-by: Alex Bennée
---
accel/tcg/internal.h | 31 +++++++++++++++++++++++++++++++
accel/tcg/translate-all.c | 31 +------------------------------
2 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index dc800fd485..62da49ed52 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -11,6 +11,37 @@ #include "exec/exec-all.h" +/* + * Access to the various translations structures need to be serialised + * via locks for consistency. In user-mode emulation access to the + * memory related structures are protected with mmap_lock. + * In !user-mode we use per-page locks. + */ +#ifdef CONFIG_SOFTMMU +#define assert_memory_lock() +#else +#define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) +#endif + +typedef struct PageDesc { + /* list of TBs intersecting this ram page */ + uintptr_t first_tb; +#ifdef CONFIG_USER_ONLY + unsigned long flags; + void *target_data; +#endif +#ifdef CONFIG_SOFTMMU + QemuSpin lock; +#endif +} PageDesc; + +PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); + +static inline PageDesc *page_find(tb_page_addr_t index) +{ + return page_find_alloc(index, false); +} + TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc, target_ulong cs_base, uint32_t flags, int cflags); diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 42385fa032..86848c6743 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -65,30 +65,6 @@ /* make various TB consistency checks */ -/* Access to the various translations structures need to be serialised via locks - * for consistency. - * In user-mode emulation access to the memory related structures are protected - * with mmap_lock. - * In !user-mode we use per-page locks.
- */ -#ifdef CONFIG_SOFTMMU -#define assert_memory_lock() -#else -#define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) -#endif - -typedef struct PageDesc { - /* list of TBs intersecting this ram page */ - uintptr_t first_tb; -#ifdef CONFIG_USER_ONLY - unsigned long flags; - void *target_data; -#endif -#ifdef CONFIG_SOFTMMU - QemuSpin lock; -#endif -} PageDesc; - /** * struct page_entry - page descriptor entry * @pd: pointer to the &struct PageDesc of the page this entry represents @@ -445,7 +421,7 @@ void page_init(void) #endif } -static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) +PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) { PageDesc *pd; void **lp; @@ -511,11 +487,6 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) return pd + (index & (V_L2_SIZE - 1)); } -static inline PageDesc *page_find(tb_page_addr_t index) -{ - return page_find_alloc(index, false); -} - static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc);
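With PageDesc and page_find_alloc() now exported through internal.h, other files under accel/tcg can walk the same page-descriptor table that translate-all.c maintains, which is what the next patch in the series relies on. A minimal sketch of a consumer (the helper itself is hypothetical, for illustration only):

    #include "internal.h"

    /* Hypothetical helper: report whether any TB intersects the page
     * containing @addr, without allocating a descriptor on a miss. */
    static bool page_has_tbs(tb_page_addr_t addr)
    {
        PageDesc *pd = page_find(addr >> TARGET_PAGE_BITS);

        /* first_tb is the (tagged) head of the page's TB list; 0 means empty. */
        return pd != NULL && pd->first_tb != 0;
    }

Because page_find() passes alloc=false to page_find_alloc(), looking up a page that was never populated returns NULL instead of growing the lookup tree.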
From patchwork Thu Oct 6 03:10:54 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612843
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 05/24] accel/tcg: Split out tb-maint.c
Date: Wed, 5 Oct 2022 20:10:54 -0700
Message-Id: <20221006031113.1139454-6-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Move all of the TranslationBlock flushing and page linking code from translate-all.c to tb-maint.c.

Signed-off-by: Richard Henderson
Reviewed-by: Alex Bennée
---
accel/tcg/internal.h | 55 +++
accel/tcg/tb-maint.c | 735 ++++++++++++++++++++++++++++++++++++
accel/tcg/translate-all.c | 766 +-------------------------------
accel/tcg/meson.build | 1 +
4 files changed, 802 insertions(+), 755 deletions(-)
create mode 100644 accel/tcg/tb-maint.c

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index 62da49ed52..a77b110b78 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -35,6 +35,27 @@ typedef struct PageDesc { #endif } PageDesc; +/* Size of the L2 (and L3, etc) page tables. */ +#define V_L2_BITS 10 +#define V_L2_SIZE (1 << V_L2_BITS) + +/* + * L1 Mapping properties + */ +extern int v_l1_size; +extern int v_l1_shift; +extern int v_l2_levels; + +/* + * The bottom level has pointers to PageDesc, and is indexed by + * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size.
+ */ +#define V_L1_MIN_BITS 4 +#define V_L1_MAX_BITS (V_L2_BITS + 3) +#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) + +extern void *l1_map[V_L1_MAX_SIZE]; + PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); static inline PageDesc *page_find(tb_page_addr_t index) @@ -42,12 +63,46 @@ static inline PageDesc *page_find(tb_page_addr_t index) return page_find_alloc(index, false); } +/* list iterators for lists of tagged pointers in TranslationBlock */ +#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ + for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \ + tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \ + tb = (TranslationBlock *)((uintptr_t)tb & ~1)) + +#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \ + TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) + +#define TB_FOR_EACH_JMP(head_tb, tb, n) \ + TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) + +/* In user-mode page locks aren't used; mmap_lock is enough */ +#ifdef CONFIG_USER_ONLY +#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) +static inline void page_lock(PageDesc *pd) { } +static inline void page_unlock(PageDesc *pd) { } +#else +#ifdef CONFIG_DEBUG_TCG +void do_assert_page_locked(const PageDesc *pd, const char *file, int line); +#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__) +#else +#define assert_page_locked(pd) +#endif +void page_lock(PageDesc *pd); +void page_unlock(PageDesc *pd); +#endif + TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc, target_ulong cs_base, uint32_t flags, int cflags); G_NORETURN void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr); void page_init(void); void tb_htable_init(void); +void tb_reset_jump(TranslationBlock *tb, int n); +TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc, + tb_page_addr_t phys_page2); +bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc); +int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb, + uintptr_t searched_pc, bool reset_icount); /* Return the current PC from CPU, which may be cached in TB. */ static inline target_ulong log_pc(CPUState *cpu, const TranslationBlock *tb) diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c new file mode 100644 index 0000000000..66c1900ae6 --- /dev/null +++ b/accel/tcg/tb-maint.c @@ -0,0 +1,735 @@ +/* + * Translation Block Maintenance + * + * Copyright (c) 2003 Fabrice Bellard + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, see . + */ + +#include "qemu/osdep.h" +#include "exec/cputlb.h" +#include "exec/log.h" +#include "exec/translate-all.h" +#include "sysemu/tcg.h" +#include "tcg/tcg.h" +#include "tb-hash.h" +#include "tb-context.h" +#include "internal.h" + +/* FIXME: tb_invalidate_phys_range is declared in different places.
*/ +#ifdef CONFIG_USER_ONLY +#include "exec/exec-all.h" +#else +#include "exec/ram_addr.h" +#endif + +static bool tb_cmp(const void *ap, const void *bp) +{ + const TranslationBlock *a = ap; + const TranslationBlock *b = bp; + + return ((TARGET_TB_PCREL || tb_pc(a) == tb_pc(b)) && + a->cs_base == b->cs_base && + a->flags == b->flags && + (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) && + a->trace_vcpu_dstate == b->trace_vcpu_dstate && + a->page_addr[0] == b->page_addr[0] && + a->page_addr[1] == b->page_addr[1]); +} + +void tb_htable_init(void) +{ + unsigned int mode = QHT_MODE_AUTO_RESIZE; + + qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode); +} + +/* Set to NULL all the 'first_tb' fields in all PageDescs. */ +static void page_flush_tb_1(int level, void **lp) +{ + int i; + + if (*lp == NULL) { + return; + } + if (level == 0) { + PageDesc *pd = *lp; + + for (i = 0; i < V_L2_SIZE; ++i) { + page_lock(&pd[i]); + pd[i].first_tb = (uintptr_t)NULL; + page_unlock(&pd[i]); + } + } else { + void **pp = *lp; + + for (i = 0; i < V_L2_SIZE; ++i) { + page_flush_tb_1(level - 1, pp + i); + } + } +} + +static void page_flush_tb(void) +{ + int i, l1_sz = v_l1_size; + + for (i = 0; i < l1_sz; i++) { + page_flush_tb_1(v_l2_levels, l1_map + i); + } +} + +/* flush all the translation blocks */ +static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count) +{ + bool did_flush = false; + + mmap_lock(); + /* If it is already been done on request of another CPU, just retry. */ + if (tb_ctx.tb_flush_count != tb_flush_count.host_int) { + goto done; + } + did_flush = true; + + CPU_FOREACH(cpu) { + tcg_flush_jmp_cache(cpu); + } + + qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE); + page_flush_tb(); + + tcg_region_reset_all(); + /* XXX: flush processor icache at this point if cache flush is expensive */ + qatomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1); + +done: + mmap_unlock(); + if (did_flush) { + qemu_plugin_flush_cb(); + } +} + +void tb_flush(CPUState *cpu) +{ + if (tcg_enabled()) { + unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count); + + if (cpu_in_exclusive_context(cpu)) { + do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count)); + } else { + async_safe_run_on_cpu(cpu, do_tb_flush, + RUN_ON_CPU_HOST_INT(tb_flush_count)); + } + } +} + +/* + * user-mode: call with mmap_lock held + * !user-mode: call with @pd->lock held + */ +static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb) +{ + TranslationBlock *tb1; + uintptr_t *pprev; + unsigned int n1; + + assert_page_locked(pd); + pprev = &pd->first_tb; + PAGE_FOR_EACH_TB(pd, tb1, n1) { + if (tb1 == tb) { + *pprev = tb1->page_next[n1]; + return; + } + pprev = &tb1->page_next[n1]; + } + g_assert_not_reached(); +} + +/* remove @orig from its @n_orig-th jump list */ +static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig) +{ + uintptr_t ptr, ptr_locked; + TranslationBlock *dest; + TranslationBlock *tb; + uintptr_t *pprev; + int n; + + /* mark the LSB of jmp_dest[] so that no further jumps can be inserted */ + ptr = qatomic_or_fetch(&orig->jmp_dest[n_orig], 1); + dest = (TranslationBlock *)(ptr & ~1); + if (dest == NULL) { + return; + } + + qemu_spin_lock(&dest->jmp_lock); + /* + * While acquiring the lock, the jump might have been removed if the + * destination TB was invalidated; check again. 
+ */ + ptr_locked = qatomic_read(&orig->jmp_dest[n_orig]); + if (ptr_locked != ptr) { + qemu_spin_unlock(&dest->jmp_lock); + /* + * The only possibility is that the jump was unlinked via + * tb_jump_unlink(dest). Seeing here another destination would be a bug, + * because we set the LSB above. + */ + g_assert(ptr_locked == 1 && dest->cflags & CF_INVALID); + return; + } + /* + * We first acquired the lock, and since the destination pointer matches, + * we know for sure that @orig is in the jmp list. + */ + pprev = &dest->jmp_list_head; + TB_FOR_EACH_JMP(dest, tb, n) { + if (tb == orig && n == n_orig) { + *pprev = tb->jmp_list_next[n]; + /* no need to set orig->jmp_dest[n]; setting the LSB was enough */ + qemu_spin_unlock(&dest->jmp_lock); + return; + } + pprev = &tb->jmp_list_next[n]; + } + g_assert_not_reached(); +} + +/* + * Reset the jump entry 'n' of a TB so that it is not chained to another TB. + */ +void tb_reset_jump(TranslationBlock *tb, int n) +{ + uintptr_t addr = (uintptr_t)(tb->tc.ptr + tb->jmp_reset_offset[n]); + tb_set_jmp_target(tb, n, addr); +} + +/* remove any jumps to the TB */ +static inline void tb_jmp_unlink(TranslationBlock *dest) +{ + TranslationBlock *tb; + int n; + + qemu_spin_lock(&dest->jmp_lock); + + TB_FOR_EACH_JMP(dest, tb, n) { + tb_reset_jump(tb, n); + qatomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1); + /* No need to clear the list entry; setting the dest ptr is enough */ + } + dest->jmp_list_head = (uintptr_t)NULL; + + qemu_spin_unlock(&dest->jmp_lock); +} + +static void tb_jmp_cache_inval_tb(TranslationBlock *tb) +{ + CPUState *cpu; + + if (TARGET_TB_PCREL) { + /* A TB may be at any virtual address */ + CPU_FOREACH(cpu) { + tcg_flush_jmp_cache(cpu); + } + } else { + uint32_t h = tb_jmp_cache_hash_func(tb_pc(tb)); + + CPU_FOREACH(cpu) { + CPUJumpCache *jc = cpu->tb_jmp_cache; + + if (qatomic_read(&jc->array[h].tb) == tb) { + qatomic_set(&jc->array[h].tb, NULL); + } + } + } +} + +/* + * In user-mode, call with mmap_lock held. + * In !user-mode, if @rm_from_page_list is set, call with the TB's pages' + * locks held. + */ +static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list) +{ + PageDesc *p; + uint32_t h; + tb_page_addr_t phys_pc; + uint32_t orig_cflags = tb_cflags(tb); + + assert_memory_lock(); + + /* make sure no further incoming jumps will be chained to this TB */ + qemu_spin_lock(&tb->jmp_lock); + qatomic_set(&tb->cflags, tb->cflags | CF_INVALID); + qemu_spin_unlock(&tb->jmp_lock); + + /* remove the TB from the hash list */ + phys_pc = tb->page_addr[0]; + h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 
0 : tb_pc(tb)), + tb->flags, orig_cflags, tb->trace_vcpu_dstate); + if (!qht_remove(&tb_ctx.htable, tb, h)) { + return; + } + + /* remove the TB from the page list */ + if (rm_from_page_list) { + p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS); + tb_page_remove(p, tb); + if (tb->page_addr[1] != -1) { + p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS); + tb_page_remove(p, tb); + } + } + + /* remove the TB from the hash list */ + tb_jmp_cache_inval_tb(tb); + + /* suppress this TB from the two jump lists */ + tb_remove_from_jmp_list(tb, 0); + tb_remove_from_jmp_list(tb, 1); + + /* suppress any remaining jumps to this TB */ + tb_jmp_unlink(tb); + + qatomic_set(&tb_ctx.tb_phys_invalidate_count, + tb_ctx.tb_phys_invalidate_count + 1); +} + +static void tb_phys_invalidate__locked(TranslationBlock *tb) +{ + qemu_thread_jit_write(); + do_tb_phys_invalidate(tb, true); + qemu_thread_jit_execute(); +} + +static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, + PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc) +{ + PageDesc *p1, *p2; + tb_page_addr_t page1; + tb_page_addr_t page2; + + assert_memory_lock(); + g_assert(phys1 != -1); + + page1 = phys1 >> TARGET_PAGE_BITS; + page2 = phys2 >> TARGET_PAGE_BITS; + + p1 = page_find_alloc(page1, alloc); + if (ret_p1) { + *ret_p1 = p1; + } + if (likely(phys2 == -1)) { + page_lock(p1); + return; + } else if (page1 == page2) { + page_lock(p1); + if (ret_p2) { + *ret_p2 = p1; + } + return; + } + p2 = page_find_alloc(page2, alloc); + if (ret_p2) { + *ret_p2 = p2; + } + if (page1 < page2) { + page_lock(p1); + page_lock(p2); + } else { + page_lock(p2); + page_lock(p1); + } +} + +#ifdef CONFIG_USER_ONLY +static inline void page_lock_tb(const TranslationBlock *tb) { } +static inline void page_unlock_tb(const TranslationBlock *tb) { } +#else +/* lock the page(s) of a TB in the correct acquisition order */ +static void page_lock_tb(const TranslationBlock *tb) +{ + page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], false); +} + +static void page_unlock_tb(const TranslationBlock *tb) +{ + PageDesc *p1 = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS); + + page_unlock(p1); + if (unlikely(tb->page_addr[1] != -1)) { + PageDesc *p2 = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS); + + if (p2 != p1) { + page_unlock(p2); + } + } +} +#endif + +/* + * Invalidate one TB. + * Called with mmap_lock held in user-mode. + */ +void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr) +{ + if (page_addr == -1 && tb->page_addr[0] != -1) { + page_lock_tb(tb); + do_tb_phys_invalidate(tb, true); + page_unlock_tb(tb); + } else { + do_tb_phys_invalidate(tb, false); + } +} + +/* + * Add the tb in the target page and protect it if necessary. + * Called with mmap_lock held for user-mode emulation. + * Called with @p->lock held in !user-mode. + */ +static inline void tb_page_add(PageDesc *p, TranslationBlock *tb, + unsigned int n, tb_page_addr_t page_addr) +{ +#ifndef CONFIG_USER_ONLY + bool page_already_protected; +#endif + + assert_page_locked(p); + + tb->page_addr[n] = page_addr; + tb->page_next[n] = p->first_tb; +#ifndef CONFIG_USER_ONLY + page_already_protected = p->first_tb != (uintptr_t)NULL; +#endif + p->first_tb = (uintptr_t)tb | n; + +#if defined(CONFIG_USER_ONLY) + /* translator_loop() must have made all TB pages non-writable */ + assert(!(p->flags & PAGE_WRITE)); +#else + /* + * If some code is already present, then the pages are already + * protected. So we handle the case where only the first TB is + * allocated in a physical page. 
+ */ + if (!page_already_protected) { + tlb_protect_code(page_addr); + } +#endif +} + +/* + * Add a new TB and link it to the physical page tables. phys_page2 is + * (-1) to indicate that only one page contains the TB. + * + * Called with mmap_lock held for user-mode emulation. + * + * Returns a pointer @tb, or a pointer to an existing TB that matches @tb. + * Note that in !user-mode, another thread might have already added a TB + * for the same block of guest code that @tb corresponds to. In that case, + * the caller should discard the original @tb, and use instead the returned TB. + */ +TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc, + tb_page_addr_t phys_page2) +{ + PageDesc *p; + PageDesc *p2 = NULL; + void *existing_tb = NULL; + uint32_t h; + + assert_memory_lock(); + tcg_debug_assert(!(tb->cflags & CF_INVALID)); + + /* + * Add the TB to the page list, acquiring first the pages's locks. + * We keep the locks held until after inserting the TB in the hash table, + * so that if the insertion fails we know for sure that the TBs are still + * in the page descriptors. + * Note that inserting into the hash table first isn't an option, since + * we can only insert TBs that are fully initialized. + */ + page_lock_pair(&p, phys_pc, &p2, phys_page2, true); + tb_page_add(p, tb, 0, phys_pc); + if (p2) { + tb_page_add(p2, tb, 1, phys_page2); + } else { + tb->page_addr[1] = -1; + } + + /* add in the hash table */ + h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)), + tb->flags, tb->cflags, tb->trace_vcpu_dstate); + qht_insert(&tb_ctx.htable, tb, h, &existing_tb); + + /* remove TB from the page(s) if we couldn't insert it */ + if (unlikely(existing_tb)) { + tb_page_remove(p, tb); + if (p2) { + tb_page_remove(p2, tb); + } + tb = existing_tb; + } + + if (p2 && p2 != p) { + page_unlock(p2); + } + page_unlock(p); + return tb; +} + +/* + * @p must be non-NULL. + * user-mode: call with mmap_lock held. + * !user-mode: call with all @pages locked. + */ +static void +tb_invalidate_phys_page_range__locked(struct page_collection *pages, + PageDesc *p, tb_page_addr_t start, + tb_page_addr_t end, + uintptr_t retaddr) +{ + TranslationBlock *tb; + tb_page_addr_t tb_start, tb_end; + int n; +#ifdef TARGET_HAS_PRECISE_SMC + CPUState *cpu = current_cpu; + CPUArchState *env = NULL; + bool current_tb_not_found = retaddr != 0; + bool current_tb_modified = false; + TranslationBlock *current_tb = NULL; + target_ulong current_pc = 0; + target_ulong current_cs_base = 0; + uint32_t current_flags = 0; +#endif /* TARGET_HAS_PRECISE_SMC */ + + assert_page_locked(p); + +#if defined(TARGET_HAS_PRECISE_SMC) + if (cpu != NULL) { + env = cpu->env_ptr; + } +#endif + + /* + * We remove all the TBs in the range [start, end[. 
+ * XXX: see if in some cases it could be faster to invalidate all the code + */ + PAGE_FOR_EACH_TB(p, tb, n) { + assert_page_locked(p); + /* NOTE: this is subtle as a TB may span two physical pages */ + if (n == 0) { + /* NOTE: tb_end may be after the end of the page, but + it is not a problem */ + tb_start = tb->page_addr[0]; + tb_end = tb_start + tb->size; + } else { + tb_start = tb->page_addr[1]; + tb_end = tb_start + ((tb->page_addr[0] + tb->size) + & ~TARGET_PAGE_MASK); + } + if (!(tb_end <= start || tb_start >= end)) { +#ifdef TARGET_HAS_PRECISE_SMC + if (current_tb_not_found) { + current_tb_not_found = false; + /* now we have a real cpu fault */ + current_tb = tcg_tb_lookup(retaddr); + } + if (current_tb == tb && + (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) { + /* + * If we are modifying the current TB, we must stop + * its execution. We could be more precise by checking + * that the modification is after the current PC, but it + * would require a specialized function to partially + * restore the CPU state. + */ + current_tb_modified = true; + cpu_restore_state_from_tb(cpu, current_tb, retaddr, true); + cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base, + &current_flags); + } +#endif /* TARGET_HAS_PRECISE_SMC */ + tb_phys_invalidate__locked(tb); + } + } +#if !defined(CONFIG_USER_ONLY) + /* if no code remaining, no need to continue to use slow writes */ + if (!p->first_tb) { + tlb_unprotect_code(start); + } +#endif +#ifdef TARGET_HAS_PRECISE_SMC + if (current_tb_modified) { + page_collection_unlock(pages); + /* Force execution of one insn next time. */ + cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu); + mmap_unlock(); + cpu_loop_exit_noexc(cpu); + } +#endif +} + +/* + * Invalidate all TBs which intersect with the target physical address range + * [start;end[. NOTE: start and end must refer to the *same* physical page. + * 'is_cpu_write_access' should be true if called from a real cpu write + * access: the virtual CPU will exit the current TB if code is modified inside + * this TB. + * + * Called with mmap_lock held for user-mode emulation + */ +void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end) +{ + struct page_collection *pages; + PageDesc *p; + + assert_memory_lock(); + + p = page_find(start >> TARGET_PAGE_BITS); + if (p == NULL) { + return; + } + pages = page_collection_lock(start, end); + tb_invalidate_phys_page_range__locked(pages, p, start, end, 0); + page_collection_unlock(pages); +} + +/* + * Invalidate all TBs which intersect with the target physical address range + * [start;end[. NOTE: start and end may refer to *different* physical pages. + * 'is_cpu_write_access' should be true if called from a real cpu write + * access: the virtual CPU will exit the current TB if code is modified inside + * this TB. + * + * Called with mmap_lock held for user-mode emulation.
+ */ +#ifdef CONFIG_SOFTMMU +void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end) +#else +void tb_invalidate_phys_range(target_ulong start, target_ulong end) +#endif +{ + struct page_collection *pages; + tb_page_addr_t next; + + assert_memory_lock(); + + pages = page_collection_lock(start, end); + for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE; + start < end; + start = next, next += TARGET_PAGE_SIZE) { + PageDesc *pd = page_find(start >> TARGET_PAGE_BITS); + tb_page_addr_t bound = MIN(next, end); + + if (pd == NULL) { + continue; + } + tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0); + } + page_collection_unlock(pages); +} + +#ifdef CONFIG_SOFTMMU +/* + * len must be <= 8 and start must be a multiple of len. + * Called via softmmu_template.h when code areas are written to with + * iothread mutex not held. + * + * Call with all @pages in the range [@start, @start + len[ locked. + */ +void tb_invalidate_phys_page_fast(struct page_collection *pages, + tb_page_addr_t start, int len, + uintptr_t retaddr) +{ + PageDesc *p; + + assert_memory_lock(); + + p = page_find(start >> TARGET_PAGE_BITS); + if (!p) { + return; + } + + assert_page_locked(p); + tb_invalidate_phys_page_range__locked(pages, p, start, start + len, + retaddr); +} +#else +/* + * Called with mmap_lock held. If pc is not 0 then it indicates the + * host PC of the faulting store instruction that caused this invalidate. + * Returns true if the caller needs to abort execution of the current + * TB (because it was modified by this store and the guest CPU has + * precise-SMC semantics). + */ +bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc) +{ + TranslationBlock *tb; + PageDesc *p; + int n; +#ifdef TARGET_HAS_PRECISE_SMC + TranslationBlock *current_tb = NULL; + CPUState *cpu = current_cpu; + CPUArchState *env = NULL; + int current_tb_modified = 0; + target_ulong current_pc = 0; + target_ulong current_cs_base = 0; + uint32_t current_flags = 0; +#endif + + assert_memory_lock(); + + addr &= TARGET_PAGE_MASK; + p = page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + return false; + } + +#ifdef TARGET_HAS_PRECISE_SMC + if (p->first_tb && pc != 0) { + current_tb = tcg_tb_lookup(pc); + } + if (cpu != NULL) { + env = cpu->env_ptr; + } +#endif + assert_page_locked(p); + PAGE_FOR_EACH_TB(p, tb, n) { +#ifdef TARGET_HAS_PRECISE_SMC + if (current_tb == tb && + (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) { + /* + * If we are modifying the current TB, we must stop its execution. + * We could be more precise by checking that the modification is + * after the current PC, but it would require a specialized + * function to partially restore the CPU state. + */ + current_tb_modified = 1; + cpu_restore_state_from_tb(cpu, current_tb, pc, true); + cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base, + &current_flags); + } +#endif /* TARGET_HAS_PRECISE_SMC */ + tb_phys_invalidate(tb, addr); + } + p->first_tb = (uintptr_t)NULL; +#ifdef TARGET_HAS_PRECISE_SMC + if (current_tb_modified) { + /* Force execution of one insn next time.
*/ + cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu); + return true; + } +#endif + + return false; +} +#endif diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c index 86848c6743..5e28e9fccd 100644 --- a/accel/tcg/translate-all.c +++ b/accel/tcg/translate-all.c @@ -109,18 +109,6 @@ struct page_collection { struct page_entry *max; }; -/* list iterators for lists of tagged pointers in TranslationBlock */ -#define TB_FOR_EACH_TAGGED(head, tb, n, field) \ - for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \ - tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \ - tb = (TranslationBlock *)((uintptr_t)tb & ~1)) - -#define PAGE_FOR_EACH_TB(pagedesc, tb, n) \ - TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next) - -#define TB_FOR_EACH_JMP(head_tb, tb, n) \ - TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next) - /* * In system mode we want L1_MAP to be based on ram offsets, * while in user mode we want it to be based on virtual addresses. @@ -138,10 +126,6 @@ struct page_collection { # define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS) #endif -/* Size of the L2 (and L3, etc) page tables. */ -#define V_L2_BITS 10 -#define V_L2_SIZE (1 << V_L2_BITS) - /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS > sizeof_field(TranslationBlock, trace_vcpu_dstate) @@ -150,18 +134,11 @@ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS > /* * L1 Mapping properties */ -static int v_l1_size; -static int v_l1_shift; -static int v_l2_levels; +int v_l1_size; +int v_l1_shift; +int v_l2_levels; -/* The bottom level has pointers to PageDesc, and is indexed by - * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size. - */ -#define V_L1_MIN_BITS 4 -#define V_L1_MAX_BITS (V_L2_BITS + 3) -#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) - -static void *l1_map[V_L1_MAX_SIZE]; +void *l1_map[V_L1_MAX_SIZE]; TBContext tb_ctx; @@ -274,8 +251,8 @@ static int encode_search(TranslationBlock *tb, uint8_t *block) * When reset_icount is true, current TB will be interrupted and * icount should be recalculated. 
*/ -static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb, - uintptr_t searched_pc, bool reset_icount) +int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb, + uintptr_t searched_pc, bool reset_icount) { target_ulong data[TARGET_INSN_START_WORDS]; uintptr_t host_pc = (uintptr_t)tb->tc.ptr; @@ -487,26 +464,8 @@ PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) return pd + (index & (V_L2_SIZE - 1)); } -static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, - PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc); - /* In user-mode page locks aren't used; mmap_lock is enough */ #ifdef CONFIG_USER_ONLY - -#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock()) - -static inline void page_lock(PageDesc *pd) -{ } - -static inline void page_unlock(PageDesc *pd) -{ } - -static inline void page_lock_tb(const TranslationBlock *tb) -{ } - -static inline void page_unlock_tb(const TranslationBlock *tb) -{ } - struct page_collection * page_collection_lock(tb_page_addr_t start, tb_page_addr_t end) { @@ -555,8 +514,7 @@ static void page_unlock__debug(const PageDesc *pd) g_assert(removed); } -static void -do_assert_page_locked(const PageDesc *pd, const char *file, int line) +void do_assert_page_locked(const PageDesc *pd, const char *file, int line) { if (unlikely(!page_is_locked(pd))) { error_report("assert_page_lock: PageDesc %p not locked @ %s:%d", @@ -565,8 +523,6 @@ do_assert_page_locked(const PageDesc *pd, const char *file, int line) } } -#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__) - void assert_no_pages_locked(void) { ht_pages_locked_debug_init(); @@ -575,50 +531,23 @@ void assert_no_pages_locked(void) #else /* !CONFIG_DEBUG_TCG */ -#define assert_page_locked(pd) - -static inline void page_lock__debug(const PageDesc *pd) -{ -} - -static inline void page_unlock__debug(const PageDesc *pd) -{ -} +static inline void page_lock__debug(const PageDesc *pd) { } +static inline void page_unlock__debug(const PageDesc *pd) { } #endif /* CONFIG_DEBUG_TCG */ -static inline void page_lock(PageDesc *pd) +void page_lock(PageDesc *pd) { page_lock__debug(pd); qemu_spin_lock(&pd->lock); } -static inline void page_unlock(PageDesc *pd) +void page_unlock(PageDesc *pd) { qemu_spin_unlock(&pd->lock); page_unlock__debug(pd); } -/* lock the page(s) of a TB in the correct acquisition order */ -static inline void page_lock_tb(const TranslationBlock *tb) -{ - page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], false); -} - -static inline void page_unlock_tb(const TranslationBlock *tb) -{ - PageDesc *p1 = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS); - - page_unlock(p1); - if (unlikely(tb->page_addr[1] != -1)) { - PageDesc *p2 = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS); - - if (p2 != p1) { - page_unlock(p2); - } - } -} - static inline struct page_entry * page_entry_new(PageDesc *pd, tb_page_addr_t index) { @@ -790,434 +719,6 @@ void page_collection_unlock(struct page_collection *set) #endif /* !CONFIG_USER_ONLY */ -static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1, - PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc) -{ - PageDesc *p1, *p2; - tb_page_addr_t page1; - tb_page_addr_t page2; - - assert_memory_lock(); - g_assert(phys1 != -1); - - page1 = phys1 >> TARGET_PAGE_BITS; - page2 = phys2 >> TARGET_PAGE_BITS; - - p1 = page_find_alloc(page1, alloc); - if (ret_p1) { - *ret_p1 = p1; - } - if (likely(phys2 == -1)) { - page_lock(p1); - return; - } else if (page1 == page2) { - page_lock(p1); - if 
(ret_p2) { - *ret_p2 = p1; - } - return; - } - p2 = page_find_alloc(page2, alloc); - if (ret_p2) { - *ret_p2 = p2; - } - if (page1 < page2) { - page_lock(p1); - page_lock(p2); - } else { - page_lock(p2); - page_lock(p1); - } -} - -static bool tb_cmp(const void *ap, const void *bp) -{ - const TranslationBlock *a = ap; - const TranslationBlock *b = bp; - - return ((TARGET_TB_PCREL || tb_pc(a) == tb_pc(b)) && - a->cs_base == b->cs_base && - a->flags == b->flags && - (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) && - a->trace_vcpu_dstate == b->trace_vcpu_dstate && - a->page_addr[0] == b->page_addr[0] && - a->page_addr[1] == b->page_addr[1]); -} - -void tb_htable_init(void) -{ - unsigned int mode = QHT_MODE_AUTO_RESIZE; - - qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode); -} - -/* Set to NULL all the 'first_tb' fields in all PageDescs. */ -static void page_flush_tb_1(int level, void **lp) -{ - int i; - - if (*lp == NULL) { - return; - } - if (level == 0) { - PageDesc *pd = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - page_lock(&pd[i]); - pd[i].first_tb = (uintptr_t)NULL; - page_unlock(&pd[i]); - } - } else { - void **pp = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - page_flush_tb_1(level - 1, pp + i); - } - } -} - -static void page_flush_tb(void) -{ - int i, l1_sz = v_l1_size; - - for (i = 0; i < l1_sz; i++) { - page_flush_tb_1(v_l2_levels, l1_map + i); - } -} - -/* flush all the translation blocks */ -static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count) -{ - bool did_flush = false; - - mmap_lock(); - /* If it is already been done on request of another CPU, - * just retry. - */ - if (tb_ctx.tb_flush_count != tb_flush_count.host_int) { - goto done; - } - did_flush = true; - - CPU_FOREACH(cpu) { - tcg_flush_jmp_cache(cpu); - } - - qht_reset_size(&tb_ctx.htable, CODE_GEN_HTABLE_SIZE); - page_flush_tb(); - - tcg_region_reset_all(); - /* XXX: flush processor icache at this point if cache flush is - expensive */ - qatomic_mb_set(&tb_ctx.tb_flush_count, tb_ctx.tb_flush_count + 1); - -done: - mmap_unlock(); - if (did_flush) { - qemu_plugin_flush_cb(); - } -} - -void tb_flush(CPUState *cpu) -{ - if (tcg_enabled()) { - unsigned tb_flush_count = qatomic_mb_read(&tb_ctx.tb_flush_count); - - if (cpu_in_exclusive_context(cpu)) { - do_tb_flush(cpu, RUN_ON_CPU_HOST_INT(tb_flush_count)); - } else { - async_safe_run_on_cpu(cpu, do_tb_flush, - RUN_ON_CPU_HOST_INT(tb_flush_count)); - } - } -} - -/* - * user-mode: call with mmap_lock held - * !user-mode: call with @pd->lock held - */ -static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb) -{ - TranslationBlock *tb1; - uintptr_t *pprev; - unsigned int n1; - - assert_page_locked(pd); - pprev = &pd->first_tb; - PAGE_FOR_EACH_TB(pd, tb1, n1) { - if (tb1 == tb) { - *pprev = tb1->page_next[n1]; - return; - } - pprev = &tb1->page_next[n1]; - } - g_assert_not_reached(); -} - -/* remove @orig from its @n_orig-th jump list */ -static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig) -{ - uintptr_t ptr, ptr_locked; - TranslationBlock *dest; - TranslationBlock *tb; - uintptr_t *pprev; - int n; - - /* mark the LSB of jmp_dest[] so that no further jumps can be inserted */ - ptr = qatomic_or_fetch(&orig->jmp_dest[n_orig], 1); - dest = (TranslationBlock *)(ptr & ~1); - if (dest == NULL) { - return; - } - - qemu_spin_lock(&dest->jmp_lock); - /* - * While acquiring the lock, the jump might have been removed if the - * destination TB was invalidated; check again. 
- */
-    ptr_locked = qatomic_read(&orig->jmp_dest[n_orig]);
-    if (ptr_locked != ptr) {
-        qemu_spin_unlock(&dest->jmp_lock);
-        /*
-         * The only possibility is that the jump was unlinked via
-         * tb_jump_unlink(dest). Seeing here another destination would be a bug,
-         * because we set the LSB above.
-         */
-        g_assert(ptr_locked == 1 && dest->cflags & CF_INVALID);
-        return;
-    }
-    /*
-     * We first acquired the lock, and since the destination pointer matches,
-     * we know for sure that @orig is in the jmp list.
-     */
-    pprev = &dest->jmp_list_head;
-    TB_FOR_EACH_JMP(dest, tb, n) {
-        if (tb == orig && n == n_orig) {
-            *pprev = tb->jmp_list_next[n];
-            /* no need to set orig->jmp_dest[n]; setting the LSB was enough */
-            qemu_spin_unlock(&dest->jmp_lock);
-            return;
-        }
-        pprev = &tb->jmp_list_next[n];
-    }
-    g_assert_not_reached();
-}
-
-/* reset the jump entry 'n' of a TB so that it is not chained to
-   another TB */
-static inline void tb_reset_jump(TranslationBlock *tb, int n)
-{
-    uintptr_t addr = (uintptr_t)(tb->tc.ptr + tb->jmp_reset_offset[n]);
-    tb_set_jmp_target(tb, n, addr);
-}
-
-/* remove any jumps to the TB */
-static inline void tb_jmp_unlink(TranslationBlock *dest)
-{
-    TranslationBlock *tb;
-    int n;
-
-    qemu_spin_lock(&dest->jmp_lock);
-
-    TB_FOR_EACH_JMP(dest, tb, n) {
-        tb_reset_jump(tb, n);
-        qatomic_and(&tb->jmp_dest[n], (uintptr_t)NULL | 1);
-        /* No need to clear the list entry; setting the dest ptr is enough */
-    }
-    dest->jmp_list_head = (uintptr_t)NULL;
-
-    qemu_spin_unlock(&dest->jmp_lock);
-}
-
-static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
-{
-    CPUState *cpu;
-
-    if (TARGET_TB_PCREL) {
-        /* A TB may be at any virtual address */
-        CPU_FOREACH(cpu) {
-            tcg_flush_jmp_cache(cpu);
-        }
-    } else {
-        uint32_t h = tb_jmp_cache_hash_func(tb_pc(tb));
-
-        CPU_FOREACH(cpu) {
-            CPUJumpCache *jc = cpu->tb_jmp_cache;
-
-            if (qatomic_read(&jc->array[h].tb) == tb) {
-                qatomic_set(&jc->array[h].tb, NULL);
-            }
-        }
-    }
-}
-
-/*
- * In user-mode, call with mmap_lock held.
- * In !user-mode, if @rm_from_page_list is set, call with the TB's pages'
- * locks held.
- */
-static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
-{
-    PageDesc *p;
-    uint32_t h;
-    tb_page_addr_t phys_pc;
-    uint32_t orig_cflags = tb_cflags(tb);
-
-    assert_memory_lock();
-
-    /* make sure no further incoming jumps will be chained to this TB */
-    qemu_spin_lock(&tb->jmp_lock);
-    qatomic_set(&tb->cflags, tb->cflags | CF_INVALID);
-    qemu_spin_unlock(&tb->jmp_lock);
-
-    /* remove the TB from the hash list */
-    phys_pc = tb->page_addr[0];
-    h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
-                     tb->flags, orig_cflags, tb->trace_vcpu_dstate);
-    if (!qht_remove(&tb_ctx.htable, tb, h)) {
-        return;
-    }
-
-    /* remove the TB from the page list */
-    if (rm_from_page_list) {
-        p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
-        tb_page_remove(p, tb);
-        if (tb->page_addr[1] != -1) {
-            p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
-            tb_page_remove(p, tb);
-        }
-    }
-
-    /* remove the TB from the hash list */
-    tb_jmp_cache_inval_tb(tb);
-
-    /* suppress this TB from the two jump lists */
-    tb_remove_from_jmp_list(tb, 0);
-    tb_remove_from_jmp_list(tb, 1);
-
-    /* suppress any remaining jumps to this TB */
-    tb_jmp_unlink(tb);
-
-    qatomic_set(&tb_ctx.tb_phys_invalidate_count,
-                tb_ctx.tb_phys_invalidate_count + 1);
-}
-
-static void tb_phys_invalidate__locked(TranslationBlock *tb)
-{
-    qemu_thread_jit_write();
-    do_tb_phys_invalidate(tb, true);
-    qemu_thread_jit_execute();
-}
-
-/* invalidate one TB
- *
- * Called with mmap_lock held in user-mode.
- */
-void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
-{
-    if (page_addr == -1 && tb->page_addr[0] != -1) {
-        page_lock_tb(tb);
-        do_tb_phys_invalidate(tb, true);
-        page_unlock_tb(tb);
-    } else {
-        do_tb_phys_invalidate(tb, false);
-    }
-}
-
-/* add the tb in the target page and protect it if necessary
- *
- * Called with mmap_lock held for user-mode emulation.
- * Called with @p->lock held in !user-mode.
- */
-static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
-                               unsigned int n, tb_page_addr_t page_addr)
-{
-#ifndef CONFIG_USER_ONLY
-    bool page_already_protected;
-#endif
-
-    assert_page_locked(p);
-
-    tb->page_addr[n] = page_addr;
-    tb->page_next[n] = p->first_tb;
-#ifndef CONFIG_USER_ONLY
-    page_already_protected = p->first_tb != (uintptr_t)NULL;
-#endif
-    p->first_tb = (uintptr_t)tb | n;
-
-#if defined(CONFIG_USER_ONLY)
-    /* translator_loop() must have made all TB pages non-writable */
-    assert(!(p->flags & PAGE_WRITE));
-#else
-    /* if some code is already present, then the pages are already
-       protected. So we handle the case where only the first TB is
-       allocated in a physical page */
-    if (!page_already_protected) {
-        tlb_protect_code(page_addr);
-    }
-#endif
-}
-
-/*
- * Add a new TB and link it to the physical page tables. phys_page2 is
- * (-1) to indicate that only one page contains the TB.
- *
- * Called with mmap_lock held for user-mode emulation.
- *
- * Returns a pointer @tb, or a pointer to an existing TB that matches @tb.
- * Note that in !user-mode, another thread might have already added a TB
- * for the same block of guest code that @tb corresponds to. In that case,
- * the caller should discard the original @tb, and use instead the returned TB.
- */
-static TranslationBlock *
-tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
-             tb_page_addr_t phys_page2)
-{
-    PageDesc *p;
-    PageDesc *p2 = NULL;
-    void *existing_tb = NULL;
-    uint32_t h;
-
-    assert_memory_lock();
-    tcg_debug_assert(!(tb->cflags & CF_INVALID));
-
-    /*
-     * Add the TB to the page list, acquiring first the pages's locks.
-     * We keep the locks held until after inserting the TB in the hash table,
-     * so that if the insertion fails we know for sure that the TBs are still
-     * in the page descriptors.
-     * Note that inserting into the hash table first isn't an option, since
-     * we can only insert TBs that are fully initialized.
-     */
-    page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
-    tb_page_add(p, tb, 0, phys_pc);
-    if (p2) {
-        tb_page_add(p2, tb, 1, phys_page2);
-    } else {
-        tb->page_addr[1] = -1;
-    }
-
-    /* add in the hash table */
-    h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
-                     tb->flags, tb->cflags, tb->trace_vcpu_dstate);
-    qht_insert(&tb_ctx.htable, tb, h, &existing_tb);
-
-    /* remove TB from the page(s) if we couldn't insert it */
-    if (unlikely(existing_tb)) {
-        tb_page_remove(p, tb);
-        if (p2) {
-            tb_page_remove(p2, tb);
-        }
-        tb = existing_tb;
-    }
-
-    if (p2 && p2 != p) {
-        page_unlock(p2);
-    }
-    page_unlock(p);
-    return tb;
-}
-
 /* Called with mmap_lock held for user mode emulation. */
 TranslationBlock *tb_gen_code(CPUState *cpu,
                               target_ulong pc, target_ulong cs_base,
@@ -1497,251 +998,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
     return tb;
 }
 
-/*
- * @p must be non-NULL.
- * user-mode: call with mmap_lock held.
- * !user-mode: call with all @pages locked.
- */
-static void
-tb_invalidate_phys_page_range__locked(struct page_collection *pages,
-                                      PageDesc *p, tb_page_addr_t start,
-                                      tb_page_addr_t end,
-                                      uintptr_t retaddr)
-{
-    TranslationBlock *tb;
-    tb_page_addr_t tb_start, tb_end;
-    int n;
-#ifdef TARGET_HAS_PRECISE_SMC
-    CPUState *cpu = current_cpu;
-    CPUArchState *env = NULL;
-    bool current_tb_not_found = retaddr != 0;
-    bool current_tb_modified = false;
-    TranslationBlock *current_tb = NULL;
-    target_ulong current_pc = 0;
-    target_ulong current_cs_base = 0;
-    uint32_t current_flags = 0;
-#endif /* TARGET_HAS_PRECISE_SMC */
-
-    assert_page_locked(p);
-
-#if defined(TARGET_HAS_PRECISE_SMC)
-    if (cpu != NULL) {
-        env = cpu->env_ptr;
-    }
-#endif
-
-    /* we remove all the TBs in the range [start, end[ */
-    /* XXX: see if in some cases it could be faster to invalidate all
-       the code */
-    PAGE_FOR_EACH_TB(p, tb, n) {
-        assert_page_locked(p);
-        /* NOTE: this is subtle as a TB may span two physical pages */
-        if (n == 0) {
-            /* NOTE: tb_end may be after the end of the page, but
-               it is not a problem */
-            tb_start = tb->page_addr[0];
-            tb_end = tb_start + tb->size;
-        } else {
-            tb_start = tb->page_addr[1];
-            tb_end = tb_start + ((tb->page_addr[0] + tb->size)
-                                 & ~TARGET_PAGE_MASK);
-        }
-        if (!(tb_end <= start || tb_start >= end)) {
-#ifdef TARGET_HAS_PRECISE_SMC
-            if (current_tb_not_found) {
-                current_tb_not_found = false;
-                /* now we have a real cpu fault */
-                current_tb = tcg_tb_lookup(retaddr);
-            }
-            if (current_tb == tb &&
-                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-                /*
-                 * If we are modifying the current TB, we must stop
-                 * its execution. We could be more precise by checking
-                 * that the modification is after the current PC, but it
-                 * would require a specialized function to partially
-                 * restore the CPU state.
-                 */
-                current_tb_modified = true;
-                cpu_restore_state_from_tb(cpu, current_tb, retaddr, true);
-                cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
-                                     &current_flags);
-            }
-#endif /* TARGET_HAS_PRECISE_SMC */
-            tb_phys_invalidate__locked(tb);
-        }
-    }
-#if !defined(CONFIG_USER_ONLY)
-    /* if no code remaining, no need to continue to use slow writes */
-    if (!p->first_tb) {
-        tlb_unprotect_code(start);
-    }
-#endif
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (current_tb_modified) {
-        page_collection_unlock(pages);
-        /* Force execution of one insn next time. */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        mmap_unlock();
-        cpu_loop_exit_noexc(cpu);
-    }
-#endif
-}
-
-/*
- * Invalidate all TBs which intersect with the target physical address range
- * [start;end[. NOTE: start and end must refer to the *same* physical page.
- * 'is_cpu_write_access' should be true if called from a real cpu write
- * access: the virtual CPU will exit the current TB if code is modified inside
- * this TB.
- *
- * Called with mmap_lock held for user-mode emulation
- */
-void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end)
-{
-    struct page_collection *pages;
-    PageDesc *p;
-
-    assert_memory_lock();
-
-    p = page_find(start >> TARGET_PAGE_BITS);
-    if (p == NULL) {
-        return;
-    }
-    pages = page_collection_lock(start, end);
-    tb_invalidate_phys_page_range__locked(pages, p, start, end, 0);
-    page_collection_unlock(pages);
-}
-
-/*
- * Invalidate all TBs which intersect with the target physical address range
- * [start;end[. NOTE: start and end may refer to *different* physical pages.
- * 'is_cpu_write_access' should be true if called from a real cpu write
- * access: the virtual CPU will exit the current TB if code is modified inside
- * this TB.
- *
- * Called with mmap_lock held for user-mode emulation.
- */
-#ifdef CONFIG_SOFTMMU
-void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end)
-#else
-void tb_invalidate_phys_range(target_ulong start, target_ulong end)
-#endif
-{
-    struct page_collection *pages;
-    tb_page_addr_t next;
-
-    assert_memory_lock();
-
-    pages = page_collection_lock(start, end);
-    for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
-         start < end;
-         start = next, next += TARGET_PAGE_SIZE) {
-        PageDesc *pd = page_find(start >> TARGET_PAGE_BITS);
-        tb_page_addr_t bound = MIN(next, end);
-
-        if (pd == NULL) {
-            continue;
-        }
-        tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
-    }
-    page_collection_unlock(pages);
-}
-
-#ifdef CONFIG_SOFTMMU
-/* len must be <= 8 and start must be a multiple of len.
- * Called via softmmu_template.h when code areas are written to with
- * iothread mutex not held.
- *
- * Call with all @pages in the range [@start, @start + len[ locked.
- */
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr)
-{
-    PageDesc *p;
-
-    assert_memory_lock();
-
-    p = page_find(start >> TARGET_PAGE_BITS);
-    if (!p) {
-        return;
-    }
-
-    assert_page_locked(p);
-    tb_invalidate_phys_page_range__locked(pages, p, start, start + len,
-                                          retaddr);
-}
-#else
-/* Called with mmap_lock held. If pc is not 0 then it indicates the
- * host PC of the faulting store instruction that caused this invalidate.
- * Returns true if the caller needs to abort execution of the current
- * TB (because it was modified by this store and the guest CPU has
- * precise-SMC semantics).
- */
-static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
-{
-    TranslationBlock *tb;
-    PageDesc *p;
-    int n;
-#ifdef TARGET_HAS_PRECISE_SMC
-    TranslationBlock *current_tb = NULL;
-    CPUState *cpu = current_cpu;
-    CPUArchState *env = NULL;
-    int current_tb_modified = 0;
-    target_ulong current_pc = 0;
-    target_ulong current_cs_base = 0;
-    uint32_t current_flags = 0;
-#endif
-
-    assert_memory_lock();
-
-    addr &= TARGET_PAGE_MASK;
-    p = page_find(addr >> TARGET_PAGE_BITS);
-    if (!p) {
-        return false;
-    }
-
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (p->first_tb && pc != 0) {
-        current_tb = tcg_tb_lookup(pc);
-    }
-    if (cpu != NULL) {
-        env = cpu->env_ptr;
-    }
-#endif
-    assert_page_locked(p);
-    PAGE_FOR_EACH_TB(p, tb, n) {
-#ifdef TARGET_HAS_PRECISE_SMC
-        if (current_tb == tb &&
-            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-            /* If we are modifying the current TB, we must stop
-               its execution. We could be more precise by checking
-               that the modification is after the current PC, but it
-               would require a specialized function to partially
-               restore the CPU state */
-
-            current_tb_modified = 1;
-            cpu_restore_state_from_tb(cpu, current_tb, pc, true);
-            cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
-                                 &current_flags);
-        }
-#endif /* TARGET_HAS_PRECISE_SMC */
-        tb_phys_invalidate(tb, addr);
-    }
-    p->first_tb = (uintptr_t)NULL;
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (current_tb_modified) {
-        /* Force execution of one insn next time. */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        return true;
-    }
-#endif
-
-    return false;
-}
-#endif
-
 /* user-mode: call with mmap_lock held */
 void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr)
 {
diff --git a/accel/tcg/meson.build b/accel/tcg/meson.build
index 7a0a79d731..75e1dffb4d 100644
--- a/accel/tcg/meson.build
+++ b/accel/tcg/meson.build
@@ -3,6 +3,7 @@ tcg_ss.add(files(
   'tcg-all.c',
   'cpu-exec-common.c',
   'cpu-exec.c',
+  'tb-maint.c',
   'tcg-runtime-gvec.c',
   'tcg-runtime.c',
   'translate-all.c',
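An aside on the jmp_dest[] handling being moved above: the walk itself is protected by jmp_lock, but the low bit of the destination pointer doubles as a "no further linking" flag, which is why qatomic_and(..., 1) and the ptr_locked == 1 assertion are enough. A minimal standalone sketch of that low-bit tagging, with toy types and names rather than QEMU's:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Toy LSB-tagged pointer: valid because the pointee is at least
 * 2-byte aligned, so bit 0 of a genuine pointer is always clear. */
typedef struct Node { int payload; } Node;

static uintptr_t tag_set(uintptr_t word)   { return word | 1; }
static int       tag_test(uintptr_t word)  { return word & 1; }
static Node     *tag_strip(uintptr_t word) { return (Node *)(word & ~(uintptr_t)1); }

int main(void)
{
    static Node n = { 42 };
    uintptr_t slot = (uintptr_t)&n;

    assert(!tag_test(slot));
    slot = tag_set(slot);    /* mark: no more links through this edge */
    assert(tag_test(slot));
    printf("tagged slot still decodes to payload %d\n", tag_strip(slot)->payload);
    return 0;
}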
From patchwork Thu Oct 6 03:10:55 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612842
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
 imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 06/24] accel/tcg: Move assert_no_pages_locked to internal.h
Date: Wed, 5 Oct 2022 20:10:55 -0700
Message-Id: <20221006031113.1139454-7-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

There are no users outside of accel/tcg; this function
does not need to be defined in exec-all.h.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/internal.h    | 5 +++++
 include/exec/exec-all.h | 8 --------
 2 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index a77b110b78..1a704ee14f 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -90,6 +90,11 @@ void do_assert_page_locked(const PageDesc *pd, const char *file, int line);
 void page_lock(PageDesc *pd);
 void page_unlock(PageDesc *pd);
 #endif
+#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
+void assert_no_pages_locked(void);
+#else
+static inline void assert_no_pages_locked(void) { }
+#endif
 
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base, uint32_t flags,
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index e5f8b224a5..b5bde1b56a 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -642,14 +642,6 @@ extern __thread uintptr_t tci_tb_ptr;
    smaller than 4 bytes, so we don't worry about special-casing this. */
 #define GETPC_ADJ 2
 
-#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
-void assert_no_pages_locked(void);
-#else
-static inline void assert_no_pages_locked(void)
-{
-}
-#endif
-
 #if !defined(CONFIG_USER_ONLY)
 /**
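The declaration pattern being relocated is worth spelling out: with the debug option enabled the function is a real out-of-line checker, otherwise it collapses to an empty static inline that the compiler removes entirely, so call sites stay unconditional. A sketch of the same idiom with hypothetical names (DEBUG_LOCKS and assert_no_locks_held are stand-ins, not QEMU identifiers):

/* header (sketch) */
#ifdef DEBUG_LOCKS
void assert_no_locks_held(void);                   /* defined in a .c file */
#else
static inline void assert_no_locks_held(void) { }  /* compiles to nothing */
#endif

/* callers need no #ifdefs: */
static void some_maintenance_path(void)
{
    assert_no_locks_held();
    /* ... */
}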
From patchwork Thu Oct 6 03:10:56 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612845
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
 imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 07/24] accel/tcg: Drop cpu_get_tb_cpu_state from TARGET_HAS_PRECISE_SMC
Date: Wed, 5 Oct 2022 20:10:56 -0700
Message-Id: <20221006031113.1139454-8-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

The results of the calls to cpu_get_tb_cpu_state,
current_{pc,cs_base,flags}, are not used.
In tb_invalidate_phys_page, use bool for current_tb_modified.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/tb-maint.c | 25 ++-----------------------
 1 file changed, 2 insertions(+), 23 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 66c1900ae6..9af5cb49e0 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -502,23 +502,13 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
     int n;
 #ifdef TARGET_HAS_PRECISE_SMC
     CPUState *cpu = current_cpu;
-    CPUArchState *env = NULL;
     bool current_tb_not_found = retaddr != 0;
     bool current_tb_modified = false;
     TranslationBlock *current_tb = NULL;
-    target_ulong current_pc = 0;
-    target_ulong current_cs_base = 0;
-    uint32_t current_flags = 0;
 #endif /* TARGET_HAS_PRECISE_SMC */
 
     assert_page_locked(p);
 
-#if defined(TARGET_HAS_PRECISE_SMC)
-    if (cpu != NULL) {
-        env = cpu->env_ptr;
-    }
-#endif
-
     /*
      * We remove all the TBs in the range [start, end[.
      * XXX: see if in some cases it could be faster to invalidate all the code
@@ -554,8 +544,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
                  */
                 current_tb_modified = true;
                 cpu_restore_state_from_tb(cpu, current_tb, retaddr, true);
-                cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
-                                     &current_flags);
             }
 #endif /* TARGET_HAS_PRECISE_SMC */
             tb_phys_invalidate__locked(tb);
@@ -679,11 +667,7 @@ bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
 #ifdef TARGET_HAS_PRECISE_SMC
     TranslationBlock *current_tb = NULL;
     CPUState *cpu = current_cpu;
-    CPUArchState *env = NULL;
-    int current_tb_modified = 0;
-    target_ulong current_pc = 0;
-    target_ulong current_cs_base = 0;
-    uint32_t current_flags = 0;
+    bool current_tb_modified = false;
 #endif
 
     assert_memory_lock();
@@ -698,9 +682,6 @@ bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
     if (p->first_tb && pc != 0) {
         current_tb = tcg_tb_lookup(pc);
     }
-    if (cpu != NULL) {
-        env = cpu->env_ptr;
-    }
 #endif
     assert_page_locked(p);
     PAGE_FOR_EACH_TB(p, tb, n) {
@@ -713,10 +694,8 @@ bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
              * after the current PC, but it would require a specialized
              * function to partially restore the CPU state.
              */
-            current_tb_modified = 1;
+            current_tb_modified = true;
             cpu_restore_state_from_tb(cpu, current_tb, pc, true);
-            cpu_get_tb_cpu_state(env, &current_pc, &current_cs_base,
-                                 &current_flags);
         }
 #endif /* TARGET_HAS_PRECISE_SMC */
         tb_phys_invalidate(tb, addr);
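For readers following the precise-SMC path this patch trims: the surviving logic only needs to record *that* the currently executing TB was invalidated, and then requests that the next execution retire exactly one instruction. A toy model of that control flow (the CF_* values here are made up for illustration; QEMU's real constants and its unwind through cpu_restore_state_from_tb/cpu_loop_exit_noexc differ):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum {                       /* made-up layout, illustration only */
    CF_COUNT_MASK = 0x0000ffff,
    CF_NOIRQ      = 0x10000000,
};

int main(void)
{
    bool current_tb_modified = true;   /* a store landed inside the block */
    uint32_t cflags_next_tb = 0;

    if (current_tb_modified) {
        /* Re-enter the execution loop with a one-instruction budget and
         * interrupts suppressed, so the write takes effect precisely. */
        cflags_next_tb = 1 | CF_NOIRQ;
    }
    printf("next budget=%u noirq=%s\n", cflags_next_tb & CF_COUNT_MASK,
           (cflags_next_tb & CF_NOIRQ) ? "yes" : "no");
    return 0;
}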
From patchwork Thu Oct 6 03:10:57 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612848
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
 imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 08/24] accel/tcg: Remove duplicate store to tb->page_addr[]
Date: Wed, 5 Oct 2022 20:10:57 -0700
Message-Id: <20221006031113.1139454-9-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

When we added the fast path, we initialized page_addr[] early.
These stores in and around tb_page_add() are redundant; remove them.

Fixes: 50627f1b7b1 ("accel/tcg: Add fast path for translator_ld*")
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/tb-maint.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 9af5cb49e0..7f4e1e1299 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -405,7 +405,6 @@ static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
 
     assert_page_locked(p);
 
-    tb->page_addr[n] = page_addr;
     tb->page_next[n] = p->first_tb;
 #ifndef CONFIG_USER_ONLY
     page_already_protected = p->first_tb != (uintptr_t)NULL;
@@ -461,8 +460,6 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     tb_page_add(p, tb, 0, phys_pc);
     if (p2) {
         tb_page_add(p2, tb, 1, phys_page2);
-    } else {
-        tb->page_addr[1] = -1;
     }
 
     /* add in the hash table */
From patchwork Thu Oct 6 03:10:58 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612850
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
 imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 09/24] accel/tcg: Introduce tb_{set_}page_addr{0,1}
Date: Wed, 5 Oct 2022 20:10:58 -0700
Message-Id: <20221006031113.1139454-10-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

This data structure will be replaced for user-only: add accessors.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/exec/exec-all.h   | 22 ++++++++++++++++++++++
 accel/tcg/cpu-exec.c      |  9 +++++----
 accel/tcg/tb-maint.c      | 29 +++++++++++++++--------------
 accel/tcg/translate-all.c | 16 ++++++++--------
 accel/tcg/translator.c    |  9 +++++----
 5 files changed, 55 insertions(+), 30 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index b5bde1b56a..5900f4637b 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -610,6 +610,28 @@ static inline uint32_t tb_cflags(const TranslationBlock *tb)
     return qatomic_read(&tb->cflags);
 }
 
+static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
+{
+    return tb->page_addr[0];
+}
+
+static inline tb_page_addr_t tb_page_addr1(const TranslationBlock *tb)
+{
+    return tb->page_addr[1];
+}
+
+static inline void tb_set_page_addr0(TranslationBlock *tb,
+                                     tb_page_addr_t addr)
+{
+    tb->page_addr[0] = addr;
+}
+
+static inline void tb_set_page_addr1(TranslationBlock *tb,
+                                     tb_page_addr_t addr)
+{
+    tb->page_addr[1] = addr;
+}
+
 /* current cflags for hashing/comparison */
 uint32_t curr_cflags(CPUState *cpu);
 
diff --git a/accel/tcg/cpu-exec.c b/accel/tcg/cpu-exec.c
index f9e5cc9ba0..70eb8a58d9 100644
--- a/accel/tcg/cpu-exec.c
+++ b/accel/tcg/cpu-exec.c
@@ -187,13 +187,14 @@ static bool tb_lookup_cmp(const void *p, const void *d)
     const struct tb_desc *desc = d;
 
     if ((TARGET_TB_PCREL || tb_pc(tb) == desc->pc) &&
-        tb->page_addr[0] == desc->page_addr0 &&
+        tb_page_addr0(tb) == desc->page_addr0 &&
         tb->cs_base == desc->cs_base &&
         tb->flags == desc->flags &&
         tb->trace_vcpu_dstate == desc->trace_vcpu_dstate &&
         tb_cflags(tb) == desc->cflags) {
         /* check next page if needed */
-        if (tb->page_addr[1] == -1) {
+        tb_page_addr_t tb_phys_page1 = tb_page_addr1(tb);
+        if (tb_phys_page1 == -1) {
             return true;
         } else {
             tb_page_addr_t phys_page1;
@@ -210,7 +211,7 @@ static bool tb_lookup_cmp(const void *p, const void *d)
              */
             virt_page1 = TARGET_PAGE_ALIGN(desc->pc);
             phys_page1 = get_page_addr_code(desc->env, virt_page1);
-            if (tb->page_addr[1] == phys_page1) {
+            if (tb_phys_page1 == phys_page1) {
                 return true;
             }
         }
@@ -1016,7 +1017,7 @@ int cpu_exec(CPUState *cpu)
              * direct jump to a TB spanning two pages because the mapping
              * for the second page can change.
              */
-            if (tb->page_addr[1] != -1) {
+            if (tb_page_addr1(tb) != -1) {
                 last_tb = NULL;
             }
 #endif
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 7f4e1e1299..15ec2f741d 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -44,8 +44,8 @@ static bool tb_cmp(const void *ap, const void *bp)
             a->flags == b->flags &&
             (tb_cflags(a) & ~CF_INVALID) == (tb_cflags(b) & ~CF_INVALID) &&
             a->trace_vcpu_dstate == b->trace_vcpu_dstate &&
-            a->page_addr[0] == b->page_addr[0] &&
-            a->page_addr[1] == b->page_addr[1]);
+            tb_page_addr0(a) == tb_page_addr0(b) &&
+            tb_page_addr1(a) == tb_page_addr1(b));
 }
 
 void tb_htable_init(void)
@@ -273,7 +273,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
     qemu_spin_unlock(&tb->jmp_lock);
 
     /* remove the TB from the hash list */
-    phys_pc = tb->page_addr[0];
+    phys_pc = tb_page_addr0(tb);
     h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
                      tb->flags, orig_cflags, tb->trace_vcpu_dstate);
     if (!qht_remove(&tb_ctx.htable, tb, h)) {
@@ -282,10 +282,11 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 
     /* remove the TB from the page list */
     if (rm_from_page_list) {
-        p = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+        p = page_find(phys_pc >> TARGET_PAGE_BITS);
         tb_page_remove(p, tb);
-        if (tb->page_addr[1] != -1) {
-            p = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+        phys_pc = tb_page_addr1(tb);
+        if (phys_pc != -1) {
+            p = page_find(phys_pc >> TARGET_PAGE_BITS);
             tb_page_remove(p, tb);
         }
     }
@@ -358,16 +359,16 @@ static inline void page_unlock_tb(const TranslationBlock *tb) { }
 /* lock the page(s) of a TB in the correct acquisition order */
 static void page_lock_tb(const TranslationBlock *tb)
 {
-    page_lock_pair(NULL, tb->page_addr[0], NULL, tb->page_addr[1], false);
+    page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
 }
 
 static void page_unlock_tb(const TranslationBlock *tb)
 {
-    PageDesc *p1 = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+    PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
 
     page_unlock(p1);
-    if (unlikely(tb->page_addr[1] != -1)) {
-        PageDesc *p2 = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+    if (unlikely(tb_page_addr1(tb) != -1)) {
+        PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
 
         if (p2 != p1) {
             page_unlock(p2);
@@ -382,7 +383,7 @@ static void page_unlock_tb(const TranslationBlock *tb)
  */
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
 {
-    if (page_addr == -1 && tb->page_addr[0] != -1) {
+    if (page_addr == -1 && tb_page_addr0(tb) != -1) {
         page_lock_tb(tb);
         do_tb_phys_invalidate(tb, true);
         page_unlock_tb(tb);
@@ -516,11 +517,11 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
         if (n == 0) {
             /* NOTE: tb_end may be after the end of the page, but
                it is not a problem */
-            tb_start = tb->page_addr[0];
+            tb_start = tb_page_addr0(tb);
             tb_end = tb_start + tb->size;
         } else {
-            tb_start = tb->page_addr[1];
-            tb_end = tb_start + ((tb->page_addr[0] + tb->size)
+            tb_start = tb_page_addr1(tb);
+            tb_end = tb_start + ((tb_page_addr0(tb) + tb->size)
                                  & ~TARGET_PAGE_MASK);
         }
         if (!(tb_end <= start || tb_start >= end)) {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 5e28e9fccd..bef4c56cff 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -698,9 +698,9 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
         }
         assert_page_locked(pd);
         PAGE_FOR_EACH_TB(pd, tb, n) {
-            if (page_trylock_add(set, tb->page_addr[0]) ||
-                (tb->page_addr[1] != -1 &&
-                 page_trylock_add(set, tb->page_addr[1]))) {
+            if (page_trylock_add(set, tb_page_addr0(tb)) ||
+                (tb_page_addr1(tb) != -1 &&
+                 page_trylock_add(set, tb_page_addr1(tb)))) {
                 /* drop all locks, and reacquire in order */
                 g_tree_foreach(set->tree, page_entry_unlock, NULL);
                 goto retry;
@@ -771,8 +771,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
     tb->flags = flags;
     tb->cflags = cflags;
     tb->trace_vcpu_dstate = *cpu->trace_dstate;
-    tb->page_addr[0] = phys_pc;
-    tb->page_addr[1] = -1;
+    tb_set_page_addr0(tb, phys_pc);
+    tb_set_page_addr1(tb, -1);
     tcg_ctx->tb_cflags = cflags;
 
 tb_overflow:
@@ -970,7 +970,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
      * a temporary one-insn TB, and we have nothing left to do. Return early
      * before attempting to link to other TBs or add to the lookup table.
      */
-    if (tb->page_addr[0] == -1) {
+    if (tb_page_addr0(tb) == -1) {
         return tb;
     }
 
@@ -985,7 +985,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
      * No explicit memory barrier is required -- tb_link_page() makes the
      * TB visible in a consistent state.
      */
-    existing_tb = tb_link_page(tb, tb->page_addr[0], tb->page_addr[1]);
+    existing_tb = tb_link_page(tb, tb_page_addr0(tb), tb_page_addr1(tb));
     /* if the TB already exists, discard what we just translated */
     if (unlikely(existing_tb != tb)) {
         uintptr_t orig_aligned = (uintptr_t)gen_code_buf;
@@ -1140,7 +1140,7 @@ static gboolean tb_tree_stats_iter(gpointer key, gpointer value, gpointer data)
     if (tb->size > tst->max_target_size) {
         tst->max_target_size = tb->size;
     }
-    if (tb->page_addr[1] != -1) {
+    if (tb_page_addr1(tb) != -1) {
         tst->cross_page++;
     }
     if (tb->jmp_reset_offset[0] != TB_JMP_RESET_OFFSET_INVALID) {
diff --git a/accel/tcg/translator.c b/accel/tcg/translator.c
index 8e78fd7a9c..061519691f 100644
--- a/accel/tcg/translator.c
+++ b/accel/tcg/translator.c
@@ -157,7 +157,7 @@ static void *translator_access(CPUArchState *env, DisasContextBase *db,
     tb = db->tb;
 
     /* Use slow path if first page is MMIO. */
-    if (unlikely(tb->page_addr[0] == -1)) {
+    if (unlikely(tb_page_addr0(tb) == -1)) {
         return NULL;
     }
 
@@ -169,13 +169,14 @@ static void *translator_access(CPUArchState *env, DisasContextBase *db,
         host = db->host_addr[1];
         base = TARGET_PAGE_ALIGN(db->pc_first);
         if (host == NULL) {
-            tb->page_addr[1] =
+            tb_page_addr_t phys_page =
                 get_page_addr_code_hostp(env, base, &db->host_addr[1]);
+            /* We cannot handle MMIO as second page. */
+            assert(phys_page != -1);
+            tb_set_page_addr1(tb, phys_page);
 #ifdef CONFIG_USER_ONLY
             page_protect(end);
 #endif
-            /* We cannot handle MMIO as second page. */
-            assert(tb->page_addr[1] != -1);
             host = db->host_addr[1];
         }
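The shape of the conversion is easiest to see in isolation: every reader and writer of page_addr[] now goes through an inline accessor, so the representation can later change for user-only builds without touching call sites. A minimal sketch with stand-in types (page_addr_t here is not QEMU's tb_page_addr_t):

#include <assert.h>
#include <stdint.h>

typedef uintptr_t page_addr_t;
typedef struct TB { page_addr_t page_addr[2]; } TB;

static inline page_addr_t tb_page_addr1(const TB *tb)       { return tb->page_addr[1]; }
static inline void tb_set_page_addr1(TB *tb, page_addr_t a) { tb->page_addr[1] = a; }

int main(void)
{
    TB tb;
    tb_set_page_addr1(&tb, (page_addr_t)-1);   /* -1: TB fits in one page */
    /* Callers ask "does the TB span two pages?" through the accessor,
     * never through the array itself. */
    assert(tb_page_addr1(&tb) == (page_addr_t)-1);
    return 0;
}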
From patchwork Thu Oct 6 03:10:59 2022
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612852
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
 imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 10/24] accel/tcg: Rename tb_invalidate_phys_page
Date: Wed, 5 Oct 2022 20:10:59 -0700
Message-Id: <20221006031113.1139454-11-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Rename to tb_invalidate_phys_page_unwind to emphasize that
we also detect invalidating the current TB, and also to free
up that name for other usage.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/internal.h      | 2 +-
 accel/tcg/tb-maint.c      | 2 +-
 accel/tcg/translate-all.c | 5 +++--
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 1a704ee14f..1227bb69bd 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -105,7 +105,7 @@ void tb_htable_init(void);
 void tb_reset_jump(TranslationBlock *tb, int n);
 TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
                                tb_page_addr_t phys_page2);
-bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc);
+bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
 int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
                               uintptr_t searched_pc, bool reset_icount);
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 15ec2f741d..92170cbbc1 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -657,7 +657,7 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
  * TB (because it was modified by this store and the guest CPU has
  * precise-SMC semantics).
  */
-bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
+bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 {
     TranslationBlock *tb;
     PageDesc *p;
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index bef4c56cff..aa8d213514 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1382,7 +1382,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
         if (!(p->flags & PAGE_WRITE) &&
             (flags & PAGE_WRITE) &&
             p->first_tb) {
-            tb_invalidate_phys_page(addr, 0);
+            tb_invalidate_phys_page_unwind(addr, 0);
         }
         if (reset_target_data) {
             g_free(p->target_data);
@@ -1580,7 +1580,8 @@ int page_unprotect(target_ulong address, uintptr_t pc)
                 /* and since the content will be modified, we must invalidate
                    the corresponding translated code. */
-                current_tb_invalidated |= tb_invalidate_phys_page(addr, pc);
+                current_tb_invalidated |=
+                    tb_invalidate_phys_page_unwind(addr, pc);
             }
             mprotect((void *)g2h_untagged(host_start), qemu_host_page_size,
                      prot & PAGE_BITS);
From patchwork Thu Oct 6 03:11:00 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
	imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 11/24] accel/tcg: Rename tb_invalidate_phys_page_range and drop end parameter
Date: Wed, 5 Oct 2022 20:11:00 -0700
Message-Id: <20221006031113.1139454-12-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

This function is never called with a real range, only for a single
page.  Drop the second parameter and rename to tb_invalidate_phys_page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/exec/translate-all.h |  2 +-
 accel/tcg/tb-maint.c         | 15 ++++++++-------
 cpu.c                        |  4 ++--
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h
index 9f646389af..3e9cb91565 100644
--- a/include/exec/translate-all.h
+++ b/include/exec/translate-all.h
@@ -29,7 +29,7 @@ void page_collection_unlock(struct page_collection *set);
 void tb_invalidate_phys_page_fast(struct page_collection *pages,
                                   tb_page_addr_t start, int len,
                                   uintptr_t retaddr);
-void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end);
+void tb_invalidate_phys_page(tb_page_addr_t addr);
 void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);

 #ifdef CONFIG_USER_ONLY
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 92170cbbc1..bac43774c0 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -565,25 +565,26 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 }

 /*
- * Invalidate all TBs which intersect with the target physical address range
- * [start;end[. NOTE: start and end must refer to the *same* physical page.
- * 'is_cpu_write_access' should be true if called from a real cpu write
- * access: the virtual CPU will exit the current TB if code is modified inside
- * this TB.
+ * Invalidate all TBs which intersect with the target physical
+ * address page @addr.
  *
  * Called with mmap_lock held for user-mode emulation
  */
-void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end)
+void tb_invalidate_phys_page(tb_page_addr_t addr)
 {
     struct page_collection *pages;
+    tb_page_addr_t start, end;
     PageDesc *p;

     assert_memory_lock();

-    p = page_find(start >> TARGET_PAGE_BITS);
+    p = page_find(addr >> TARGET_PAGE_BITS);
     if (p == NULL) {
         return;
     }
+
+    start = addr & TARGET_PAGE_MASK;
+    end = start + TARGET_PAGE_SIZE;
     pages = page_collection_lock(start, end);
     tb_invalidate_phys_page_range__locked(pages, p, start, end, 0);
     page_collection_unlock(pages);

diff --git a/cpu.c b/cpu.c
index 14365e36f3..2a09b05205 100644
--- a/cpu.c
+++ b/cpu.c
@@ -277,7 +277,7 @@ void list_cpus(const char *optarg)
 void tb_invalidate_phys_addr(target_ulong addr)
 {
     mmap_lock();
-    tb_invalidate_phys_page_range(addr, addr + 1);
+    tb_invalidate_phys_page(addr);
     mmap_unlock();
 }
 #else
@@ -298,7 +298,7 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
         return;
     }
     ram_addr = memory_region_get_ram_addr(mr) + addr;
-    tb_invalidate_phys_page_range(ram_addr, ram_addr + 1);
+    tb_invalidate_phys_page(ram_addr);
 }
 #endif
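Note how the single-page contract is realized: the incoming address is
rounded down to its page, and exactly one page worth of code is
invalidated.  As a standalone illustration of the rounding (a
self-contained sketch; the fixed 4 KiB constant stands in for the
per-target TARGET_PAGE_SIZE/TARGET_PAGE_MASK):

#include <assert.h>
#include <stdint.h>

/* Stand-ins for the per-target TARGET_PAGE_* macros. */
#define SK_PAGE_SIZE  ((uint64_t)4096)
#define SK_PAGE_MASK  (~(SK_PAGE_SIZE - 1))

int main(void)
{
    uint64_t addr  = 0x12345678;
    uint64_t start = addr & SK_PAGE_MASK;   /* 0x12345000 */
    uint64_t end   = start + SK_PAGE_SIZE;  /* 0x12346000 */

    /* [start, end) is the whole page containing addr, nothing more. */
    assert(start <= addr && addr < end);
    return 0;
}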
From patchwork Thu Oct 6 03:11:01 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
	imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 12/24] accel/tcg: Unify declarations of tb_invalidate_phys_range
Date: Wed, 5 Oct 2022 20:11:01 -0700
Message-Id: <20221006031113.1139454-13-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

We missed this function when we introduced tb_page_addr_t.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/exec/exec-all.h |  2 +-
 include/exec/ram_addr.h |  2 --
 accel/tcg/tb-maint.c    | 13 ++-----------
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 5900f4637b..5ae484e34d 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -638,12 +638,12 @@ uint32_t curr_cflags(CPUState *cpu);
 /* TranslationBlock invalidate API */
 #if defined(CONFIG_USER_ONLY)
 void tb_invalidate_phys_addr(target_ulong addr);
-void tb_invalidate_phys_range(target_ulong start, target_ulong end);
 #else
 void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs);
 #endif
 void tb_flush(CPUState *cpu);
 void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr);
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end);
 void tb_set_jmp_target(TranslationBlock *tb, int n, uintptr_t addr);

 /* GETPC is the true target of the return instruction that we'll execute. */
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index f3e0c78161..1500680458 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -147,8 +147,6 @@ static inline void qemu_ram_block_writeback(RAMBlock *block)
 #define DIRTY_CLIENTS_ALL     ((1 << DIRTY_MEMORY_NUM) - 1)
 #define DIRTY_CLIENTS_NOCODE  (DIRTY_CLIENTS_ALL & ~(1 << DIRTY_MEMORY_CODE))

-void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
-
 static inline bool cpu_physical_memory_get_dirty(ram_addr_t start,
                                                  ram_addr_t length,
                                                  unsigned client)
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index bac43774c0..c8e921089d 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -20,6 +20,7 @@
 #include "qemu/osdep.h"
 #include "exec/cputlb.h"
 #include "exec/log.h"
+#include "exec/exec-all.h"
 #include "exec/translate-all.h"
 #include "sysemu/tcg.h"
 #include "tcg/tcg.h"
@@ -27,12 +28,6 @@
 #include "tb-context.h"
 #include "internal.h"

-/* FIXME: tb_invalidate_phys_range is declared in different places.
- */
-#ifdef CONFIG_USER_ONLY
-#include "exec/exec-all.h"
-#else
-#include "exec/ram_addr.h"
-#endif

 static bool tb_cmp(const void *ap, const void *bp)
 {
@@ -599,11 +594,7 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  *
  * Called with mmap_lock held for user-mode emulation.
  */
-#ifdef CONFIG_SOFTMMU
-void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end)
-#else
-void tb_invalidate_phys_range(target_ulong start, target_ulong end)
-#endif
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 {
     struct page_collection *pages;
     tb_page_addr_t next;
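The unification works because tb_page_addr_t is already the
mode-dependent type for "address of translated code": a guest virtual
address in user-only builds, a ram_addr_t offset in system builds.  A
rough paraphrase of the idea (from memory, not the verbatim header;
check exec-all.h for the exact typedefs):

/* Paraphrase of the tb_page_addr_t definitions; treat as an assumption. */
#ifdef CONFIG_USER_ONLY
/* User-only: code is located by guest virtual address. */
typedef abi_ulong tb_page_addr_t;
#else
/* System mode: code is located by its offset into guest RAM. */
typedef ram_addr_t tb_page_addr_t;
#endif

/* Hence one prototype now serves both configurations: */
void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end);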
From patchwork Thu Oct 6 03:11:02 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
	imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 13/24] accel/tcg: Use tb_invalidate_phys_page in page_set_flags
Date: Wed, 5 Oct 2022 20:11:02 -0700
Message-Id: <20221006031113.1139454-14-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

We do not require detection of overlapping TBs here,
so use the more appropriate function.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/translate-all.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index aa8d213514..8d5233fa9e 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1382,7 +1382,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
         if (!(p->flags & PAGE_WRITE) &&
             (flags & PAGE_WRITE) &&
             p->first_tb) {
-            tb_invalidate_phys_page_unwind(addr, 0);
+            tb_invalidate_phys_page(addr);
         }
         if (reset_target_data) {
             g_free(p->target_data);
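For context on "the more appropriate function": after the renames in
this series there are two single-page entry points with different
contracts, and page_set_flags() only needs the cheaper one.  A summary
sketch (signatures taken from the patches above; the comments are this
note's interpretation):

/* Plain invalidation: drop every TB on the page; no unwind reporting.
 * Fine here because no guest insn on that page is mid-execution. */
void tb_invalidate_phys_page(tb_page_addr_t addr);

/* Invalidation with precise-SMC unwind: pc is the host address of the
 * faulting store; returns true if the currently executing TB was
 * invalidated and the caller must abort it (see page_unprotect()). */
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);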
From patchwork Thu Oct 6 03:11:03 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
	imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 14/24] accel/tcg: Call tb_invalidate_phys_page for PAGE_RESET
Date: Wed, 5 Oct 2022 20:11:03 -0700
Message-Id: <20221006031113.1139454-15-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

When PAGE_RESET is set, we are replacing pages with new content,
which means that we need to invalidate existing cached data, such
as TranslationBlocks.  Perform the reset invalidate while we're
doing other invalidates, which allows us to remove the separate
invalidates from the user-only mmap/munmap/mprotect routines.

In addition, restrict invalidation to PAGE_EXEC pages.  Since
cdf713085131, we have validated PAGE_EXEC is present before
translation, which means we can assume that if the bit is not
present, there are no translations to invalidate.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/translate-all.c | 19 +++++++++++--------
 bsd-user/mmap.c           |  2 --
 linux-user/mmap.c         |  4 ----
 3 files changed, 11 insertions(+), 14 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 8d5233fa9e..478301f227 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1352,7 +1352,7 @@ int page_get_flags(target_ulong address)
 void page_set_flags(target_ulong start, target_ulong end, int flags)
 {
     target_ulong addr, len;
-    bool reset_target_data;
+    bool reset;

     /* This function should never be called with addresses outside the
        guest address space.
        If this assert fires, it probably indicates
@@ -1369,7 +1369,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
     if (flags & PAGE_WRITE) {
         flags |= PAGE_WRITE_ORG;
     }
-    reset_target_data = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
+    reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
     flags &= ~PAGE_RESET;

     for (addr = start, len = end - start;
@@ -1377,14 +1377,17 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
          len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
         PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true);

-        /* If the write protection bit is set, then we invalidate
-           the code inside.  */
-        if (!(p->flags & PAGE_WRITE) &&
-            (flags & PAGE_WRITE) &&
-            p->first_tb) {
+        /*
+         * If the page was executable, but is reset, or is no longer
+         * executable, or has become writable, then invalidate any code.
+         */
+        if ((p->flags & PAGE_EXEC)
+            && (reset ||
+                !(flags & PAGE_EXEC) ||
+                (flags & ~p->flags & PAGE_WRITE))) {
             tb_invalidate_phys_page(addr);
         }
-        if (reset_target_data) {
+        if (reset) {
             g_free(p->target_data);
             p->target_data = NULL;
             p->flags = flags;
diff --git a/bsd-user/mmap.c b/bsd-user/mmap.c
index e54e26de17..d6c5a344c9 100644
--- a/bsd-user/mmap.c
+++ b/bsd-user/mmap.c
@@ -663,7 +663,6 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
     page_dump(stdout);
     printf("\n");
 #endif
-    tb_invalidate_phys_range(start, start + len);
     mmap_unlock();
     return start;
 fail:
@@ -769,7 +768,6 @@ int target_munmap(abi_ulong start, abi_ulong len)

     if (ret == 0) {
         page_set_flags(start, start + len, 0);
-        tb_invalidate_phys_range(start, start + len);
     }
     mmap_unlock();
     return ret;
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 28f3bc85ed..10f5079331 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -182,7 +182,6 @@ int target_mprotect(abi_ulong start, abi_ulong len, int target_prot)
     }

     page_set_flags(start, start + len, page_flags);
-    tb_invalidate_phys_range(start, start + len);
     ret = 0;

 error:
@@ -662,7 +661,6 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int target_prot,
             qemu_log_unlock(f);
         }
     }
-    tb_invalidate_phys_range(start, start + len);
     mmap_unlock();
     return start;
 fail:
@@ -766,7 +764,6 @@ int target_munmap(abi_ulong start, abi_ulong len)

     if (ret == 0) {
         page_set_flags(start, start + len, 0);
-        tb_invalidate_phys_range(start, start + len);
     }
     mmap_unlock();
     return ret;
@@ -856,7 +853,6 @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
         page_set_flags(new_addr, new_addr + new_size,
                        prot | PAGE_VALID | PAGE_RESET);
     }
-    tb_invalidate_phys_range(new_addr, new_addr + new_size);
     mmap_unlock();
     return new_addr;
 }
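The replacement test in page_set_flags() is denser than the old
write-protect check, so it may help to see it as a standalone predicate.
A self-contained model of when the new condition fires (old_flags,
new_flags, and the SK_ constants stand in for p->flags, flags, and
QEMU's PAGE_* bits):

#include <stdbool.h>

/* Illustrative stand-ins for QEMU's PAGE_* bits. */
enum { SK_PAGE_WRITE = 1 << 0, SK_PAGE_EXEC = 1 << 1 };

/*
 * Invalidate only if the page *was* executable (otherwise nothing was
 * translated from it), and (a) its content is being reset, or (b) it
 * is losing PAGE_EXEC, or (c) it gains PAGE_WRITE it did not have.
 */
static bool must_invalidate(int old_flags, int new_flags, bool reset)
{
    return (old_flags & SK_PAGE_EXEC)
           && (reset
               || !(new_flags & SK_PAGE_EXEC)
               || (new_flags & ~old_flags & SK_PAGE_WRITE));
}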
From patchwork Thu Oct 6 03:11:04 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com,
	imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 15/24] accel/tcg: Use interval tree for TBs in user-only mode
Date: Wed, 5 Oct 2022 20:11:04 -0700
Message-Id: <20221006031113.1139454-16-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>

Begin weaning user-only away from PageDesc.

Since, for user-only, all TB (and page) manipulation is done with
a single mutex, and there is no virtual/physical discontinuity to
split a TB across discontinuous pages, place all of the TBs into
a single IntervalTree.  This makes it trivial to find all of the
TBs intersecting a range.

Retain the existing PageDesc + linked list implementation for
system mode.  Move the portion of the implementation that overlaps
the new user-only code behind the common ifdef.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/internal.h      |  16 +-
 include/exec/exec-all.h   |  43 ++++-
 accel/tcg/tb-maint.c      | 388 ++++++++++++++++++++++----------------
 accel/tcg/translate-all.c |   4 +-
 4 files changed, 280 insertions(+), 171 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 1227bb69bd..1bd5a02911 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -24,14 +24,13 @@
 #endif

 typedef struct PageDesc {
-    /* list of TBs intersecting this ram page */
-    uintptr_t first_tb;
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
     void *target_data;
-#endif
-#ifdef CONFIG_SOFTMMU
+#else
     QemuSpin lock;
+    /* list of TBs intersecting this ram page */
+    uintptr_t first_tb;
 #endif
 } PageDesc;

@@ -69,9 +68,6 @@ static inline PageDesc *page_find(tb_page_addr_t index)
          tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1,  \
              tb = (TranslationBlock *)((uintptr_t)tb & ~1))

-#define PAGE_FOR_EACH_TB(pagedesc, tb, n)                       \
-    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
-
 #define TB_FOR_EACH_JMP(head_tb, tb, n)                                 \
     TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)

@@ -89,6 +85,12 @@ void do_assert_page_locked(const PageDesc *pd, const char *file, int line);
 #endif
 void page_lock(PageDesc *pd);
 void page_unlock(PageDesc *pd);
+
+/* TODO: For now, still shared with translate-all.c for system mode. */
+typedef int PageForEachNext;
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
 #endif
 #if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
 void assert_no_pages_locked(void);
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 5ae484e34d..793ef5ba4f 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -24,6 +24,7 @@
 #ifdef CONFIG_TCG
 #include "exec/cpu_ldst.h"
 #endif
+#include "qemu/interval-tree.h"

 /* allow to see translation results - the slowdown should be negligible, so we leave it */
 #define DEBUG_DISAS
@@ -552,11 +553,20 @@ struct TranslationBlock {

     struct tb_tc tc;

-    /* first and second physical page containing code. The lower bit
-       of the pointer tells the index in page_next[].
-       The list is protected by the TB's page('s) lock(s) */
+    /*
+     * Track tb_page_addr_t intervals that intersect this TB.
+     * For user-only, the virtual addresses are always contiguous,
+     * and we use a unified interval tree.  For system, we use a
+     * linked list headed in each PageDesc.  Within the list, the lsb
+     * of the previous pointer tells the index of page_next[], and the
+     * list is protected by the PageDesc lock(s).
+     */
+#ifdef CONFIG_USER_ONLY
+    IntervalTreeNode itree;
+#else
     uintptr_t page_next[2];
     tb_page_addr_t page_addr[2];
+#endif

     /* jmp_lock placed here to fill a 4-byte hole. Its documentation is below */
     QemuSpin jmp_lock;
@@ -612,24 +622,51 @@ static inline uint32_t tb_cflags(const TranslationBlock *tb)

 static inline tb_page_addr_t tb_page_addr0(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    return tb->itree.start;
+#else
     return tb->page_addr[0];
+#endif
 }

 static inline tb_page_addr_t tb_page_addr1(const TranslationBlock *tb)
 {
+#ifdef CONFIG_USER_ONLY
+    tb_page_addr_t next = tb->itree.last & TARGET_PAGE_MASK;
+    return next == (tb->itree.start & TARGET_PAGE_MASK) ? -1 : next;
+#else
     return tb->page_addr[1];
+#endif
 }

 static inline void tb_set_page_addr0(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    tb->itree.start = addr;
+    /*
+     * To begin, we record an interval of one byte.  When the translation
+     * loop encounters a second page, the interval will be extended to
+     * include the first byte of the second page, which is sufficient to
+     * allow tb_page_addr1() above to work properly.  The final corrected
+     * interval will be set by tb_page_add() from tb->size before the
+     * node is added to the interval tree.
+     */
+    tb->itree.last = addr;
+#else
     tb->page_addr[0] = addr;
+#endif
 }

 static inline void tb_set_page_addr1(TranslationBlock *tb,
                                      tb_page_addr_t addr)
 {
+#ifdef CONFIG_USER_ONLY
+    /* Extend the interval to the first byte of the second page.  See above. */
+    tb->itree.last = addr;
+#else
     tb->page_addr[1] = addr;
+#endif
 }

 /* current cflags for hashing/comparison */
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index c8e921089d..14e8e47a6a 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -18,6 +18,7 @@
  */

 #include "qemu/osdep.h"
+#include "qemu/interval-tree.h"
 #include "exec/cputlb.h"
 #include "exec/log.h"
 #include "exec/exec-all.h"
@@ -50,6 +51,75 @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }

+#ifdef CONFIG_USER_ONLY
+/*
+ * For user-only, since we are protecting all of memory with a single lock,
+ * and because the two pages of a TranslationBlock are always contiguous,
+ * use a single data structure to record all TranslationBlocks.
+ */
+static IntervalTreeRoot tb_root;
+
+static void page_flush_tb(void)
+{
+    assert_memory_lock();
+    memset(&tb_root, 0, sizeof(tb_root));
+}
+
+/* Call with mmap_lock held. */
+static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    /* translator_loop() must have made all TB pages non-writable */
+    assert(!(p1->flags & PAGE_WRITE));
+    if (p2) {
+        assert(!(p2->flags & PAGE_WRITE));
+    }
+
+    assert_memory_lock();
+
+    tb->itree.last = tb->itree.start + tb->size - 1;
+    interval_tree_insert(&tb->itree, &tb_root);
+}
+
+/* Call with mmap_lock held. */
+static void tb_page_remove(TranslationBlock *tb)
+{
+    assert_memory_lock();
+    interval_tree_remove(&tb->itree, &tb_root);
+}
+
+/* TODO: For now, still shared with translate-all.c for system mode. */
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, T, N)    \
+    for (T = foreach_tb_first(start, end),              \
+         N = foreach_tb_next(T, start, end);            \
+         T != NULL;                                     \
+         T = N, N = foreach_tb_next(N, start, end))
+
+typedef TranslationBlock *PageForEachNext;
+
+static PageForEachNext foreach_tb_first(tb_page_addr_t start,
+                                        tb_page_addr_t end)
+{
+    IntervalTreeNode *n = interval_tree_iter_first(&tb_root, start, end - 1);
+    return n ? container_of(n, TranslationBlock, itree) : NULL;
+}
+
+static PageForEachNext foreach_tb_next(PageForEachNext tb,
+                                       tb_page_addr_t start,
+                                       tb_page_addr_t end)
+{
+    IntervalTreeNode *n;
+
+    if (tb) {
+        n = interval_tree_iter_next(&tb->itree, start, end - 1);
+        if (n) {
+            return container_of(n, TranslationBlock, itree);
+        }
+    }
+    return NULL;
+}
+
+#else
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void page_flush_tb_1(int level, void **lp)
 {
@@ -84,6 +154,70 @@ static void page_flush_tb(void)
     }
 }

+/*
+ * Add the tb in the target page and protect it if necessary.
+ * Called with @p->lock held.
+ */
+static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
+{
+    /*
+     * If some code is already present, then the pages are already
+     * protected.  So we handle the case where only the first TB is
+     * allocated in a physical page.
+     */
+    assert_page_locked(p1);
+    if (p1->first_tb) {
+        tb->page_next[0] = p1->first_tb;
+    } else {
+        tlb_protect_code(tb->page_addr[0] & TARGET_PAGE_MASK);
+        tb->page_next[0] = 0;
+    }
+    p1->first_tb = (uintptr_t)tb | 0;
+
+    if (unlikely(p2)) {
+        assert_page_locked(p2);
+        if (p2->first_tb) {
+            tb->page_next[1] = p2->first_tb;
+        } else {
+            tlb_protect_code(tb->page_addr[1] & TARGET_PAGE_MASK);
+            tb->page_next[1] = 0;
+        }
+        p2->first_tb = (uintptr_t)tb | 1;
+    }
+}
+
+static void tb_page_remove1(TranslationBlock *tb, PageDesc *pd)
+{
+    TranslationBlock *i;
+    PageForEachNext n;
+    uintptr_t *pprev;
+
+    assert_page_locked(pd);
+    pprev = &pd->first_tb;
+    PAGE_FOR_EACH_TB(unused, unused, pd, i, n) {
+        if (i == tb) {
+            *pprev = i->page_next[n];
+            return;
+        }
+        pprev = &i->page_next[n];
+    }
+    g_assert_not_reached();
+}
+
+static void tb_page_remove(TranslationBlock *tb)
+{
+    PageDesc *pd;
+
+    pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+    tb_page_remove1(tb, pd);
+    if (unlikely(tb->page_addr[1] != -1)) {
+        pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+        tb_page_remove1(tb, pd);
+    }
+}
+
+#endif
+
 /* flush all the translation blocks */
 static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
 {
@@ -128,28 +262,6 @@ void tb_flush(CPUState *cpu)
     }
 }

-/*
- * user-mode: call with mmap_lock held
- * !user-mode: call with @pd->lock held
- */
-static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
-{
-    TranslationBlock *tb1;
-    uintptr_t *pprev;
-    unsigned int n1;
-
-    assert_page_locked(pd);
-    pprev = &pd->first_tb;
-    PAGE_FOR_EACH_TB(pd, tb1, n1) {
-        if (tb1 == tb) {
-            *pprev = tb1->page_next[n1];
-            return;
-        }
-        pprev = &tb1->page_next[n1];
-    }
-    g_assert_not_reached();
-}
-
 /* remove @orig from its @n_orig-th jump list */
 static inline void tb_remove_from_jmp_list(TranslationBlock *orig, int n_orig)
 {
@@ -255,7 +367,6 @@ static void tb_jmp_cache_inval_tb(TranslationBlock *tb)
  */
 static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)
 {
-    PageDesc *p;
     uint32_t h;
     tb_page_addr_t phys_pc;
     uint32_t orig_cflags = tb_cflags(tb);
@@ -277,13 +388,7 @@ static void do_tb_phys_invalidate(TranslationBlock *tb, bool rm_from_page_list)

     /* remove the TB from the page list */
     if (rm_from_page_list) {
-        p = page_find(phys_pc >> TARGET_PAGE_BITS);
-        tb_page_remove(p, tb);
-        phys_pc = tb_page_addr1(tb);
-        if (phys_pc != -1) {
-            p = page_find(phys_pc >> TARGET_PAGE_BITS);
-            tb_page_remove(p, tb);
-        }
+        tb_page_remove(tb);
     }

     /* remove the TB from the hash list */
@@ -387,41 +492,6 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
     }
 }

-/*
- * Add the tb in the target page and protect it if necessary.
- * Called with mmap_lock held for user-mode emulation.
- * Called with @p->lock held in !user-mode.
- */
-static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
-                               unsigned int n, tb_page_addr_t page_addr)
-{
-#ifndef CONFIG_USER_ONLY
-    bool page_already_protected;
-#endif
-
-    assert_page_locked(p);
-
-    tb->page_next[n] = p->first_tb;
-#ifndef CONFIG_USER_ONLY
-    page_already_protected = p->first_tb != (uintptr_t)NULL;
-#endif
-    p->first_tb = (uintptr_t)tb | n;
-
-#if defined(CONFIG_USER_ONLY)
-    /* translator_loop() must have made all TB pages non-writable */
-    assert(!(p->flags & PAGE_WRITE));
-#else
-    /*
-     * If some code is already present, then the pages are already
-     * protected.  So we handle the case where only the first TB is
-     * allocated in a physical page.
-     */
-    if (!page_already_protected) {
-        tlb_protect_code(page_addr);
-    }
-#endif
-}
-
 /*
  * Add a new TB and link it to the physical page tables. phys_page2 is
  * (-1) to indicate that only one page contains the TB.
@@ -453,10 +523,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
      * we can only insert TBs that are fully initialized.
      */
     page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
-    tb_page_add(p, tb, 0, phys_pc);
-    if (p2) {
-        tb_page_add(p2, tb, 1, phys_page2);
-    }
+    tb_page_add(tb, p, p2);

     /* add in the hash table */
     h = tb_hash_func(phys_pc, (TARGET_TB_PCREL ? 0 : tb_pc(tb)),
@@ -465,10 +532,7 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,

     /* remove TB from the page(s) if we couldn't insert it */
     if (unlikely(existing_tb)) {
-        tb_page_remove(p, tb);
-        if (p2) {
-            tb_page_remove(p2, tb);
-        }
+        tb_page_remove(tb);
         tb = existing_tb;
     }

@@ -479,6 +543,87 @@ TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
     return tb;
 }

+#ifdef CONFIG_USER_ONLY
+/*
+ * Invalidate all TBs which intersect with the target address range.
+ * Called with mmap_lock held for user-mode emulation.
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    TranslationBlock *tb;
+    PageForEachNext n;
+
+    assert_memory_lock();
+
+    PAGE_FOR_EACH_TB(start, end, unused, tb, n) {
+        tb_phys_invalidate__locked(tb);
+    }
+}
+
+/*
+ * Invalidate all TBs which intersect with the target address page @addr.
+ * Called with mmap_lock held for user-mode emulation
+ * NOTE: this function must not be called while a TB is running.
+ */
+void tb_invalidate_phys_page(tb_page_addr_t addr)
+{
+    tb_page_addr_t start, end;
+
+    start = addr & TARGET_PAGE_MASK;
+    end = start + TARGET_PAGE_SIZE;
+    tb_invalidate_phys_range(start, end);
+}
+
+/*
+ * Called with mmap_lock held.  If pc is not 0 then it indicates the
+ * host PC of the faulting store instruction that caused this invalidate.
+ * Returns true if the caller needs to abort execution of the current
+ * TB (because it was modified by this store and the guest CPU has
+ * precise-SMC semantics).
+ */
+bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
+{
+    assert(pc != 0);
+#ifdef TARGET_HAS_PRECISE_SMC
+    assert_memory_lock();
+    {
+        TranslationBlock *current_tb = tcg_tb_lookup(pc);
+        bool current_tb_modified = false;
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        addr &= TARGET_PAGE_MASK;
+
+        PAGE_FOR_EACH_TB(addr, addr + TARGET_PAGE_SIZE, unused, tb, n) {
+            if (current_tb == tb &&
+                (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
+                /*
+                 * If we are modifying the current TB, we must stop its
+                 * execution.  We could be more precise by checking that
+                 * the modification is after the current PC, but it would
+                 * require a specialized function to partially restore
+                 * the CPU state.
+                 */
+                current_tb_modified = true;
+                cpu_restore_state_from_tb(current_cpu, current_tb, pc, true);
+            }
+            tb_phys_invalidate__locked(tb);
+        }
+
+        if (current_tb_modified) {
+            /* Force execution of one insn next time. */
+            CPUState *cpu = current_cpu;
+            cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
+            return true;
+        }
+    }
+#else
+    tb_invalidate_phys_page(addr);
+#endif /* TARGET_HAS_PRECISE_SMC */
+    return false;
+}
+#else
 /*
  * @p must be non-NULL.
  * user-mode: call with mmap_lock held.
@@ -492,22 +637,17 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 {
     TranslationBlock *tb;
     tb_page_addr_t tb_start, tb_end;
-    int n;
+    PageForEachNext n;
 #ifdef TARGET_HAS_PRECISE_SMC
-    CPUState *cpu = current_cpu;
-    bool current_tb_not_found = retaddr != 0;
     bool current_tb_modified = false;
-    TranslationBlock *current_tb = NULL;
+    TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
 #endif /* TARGET_HAS_PRECISE_SMC */

-    assert_page_locked(p);
-
     /*
      * We remove all the TBs in the range [start, end[.
      * XXX: see if in some cases it could be faster to invalidate all the code
      */
-    PAGE_FOR_EACH_TB(p, tb, n) {
-        assert_page_locked(p);
+    PAGE_FOR_EACH_TB(start, end, p, tb, n) {
         /* NOTE: this is subtle as a TB may span two physical pages */
         if (n == 0) {
             /* NOTE: tb_end may be after the end of the page, but
@@ -521,11 +661,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
         }
         if (!(tb_end <= start || tb_start >= end)) {
 #ifdef TARGET_HAS_PRECISE_SMC
-            if (current_tb_not_found) {
-                current_tb_not_found = false;
-                /* now we have a real cpu fault */
-                current_tb = tcg_tb_lookup(retaddr);
-            }
             if (current_tb == tb &&
                 (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
                 /*
@@ -536,25 +671,26 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
                  * restore the CPU state.
                  */
                 current_tb_modified = true;
-                cpu_restore_state_from_tb(cpu, current_tb, retaddr, true);
+                cpu_restore_state_from_tb(current_cpu, current_tb,
+                                          retaddr, true);
             }
 #endif /* TARGET_HAS_PRECISE_SMC */
             tb_phys_invalidate__locked(tb);
         }
     }
-#if !defined(CONFIG_USER_ONLY)
+
     /* if no code remaining, no need to continue to use slow writes */
     if (!p->first_tb) {
         tlb_unprotect_code(start);
     }
-#endif
+
 #ifdef TARGET_HAS_PRECISE_SMC
     if (current_tb_modified) {
         page_collection_unlock(pages);
         /* Force execution of one insn next time. */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
+        current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
         mmap_unlock();
-        cpu_loop_exit_noexc(cpu);
+        cpu_loop_exit_noexc(current_cpu);
     }
 #endif
 }
@@ -571,8 +707,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
     tb_page_addr_t start, end;
     PageDesc *p;

-    assert_memory_lock();
-
     p = page_find(addr >> TARGET_PAGE_BITS);
     if (p == NULL) {
         return;
@@ -599,8 +733,6 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
     struct page_collection *pages;
     tb_page_addr_t next;

-    assert_memory_lock();
-
     pages = page_collection_lock(start, end);
     for (next = (start & TARGET_PAGE_MASK) + TARGET_PAGE_SIZE;
          start < end;
@@ -611,12 +743,12 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
         if (pd == NULL) {
             continue;
         }
+        assert_page_locked(pd);
         tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
     }
     page_collection_unlock(pages);
 }

-#ifdef CONFIG_SOFTMMU
 /*
  * len must be <= 8 and start must be a multiple of len.
  * Called via softmmu_template.h when code areas are written to with
@@ -630,8 +762,6 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
 {
     PageDesc *p;

-    assert_memory_lock();
-
     p = page_find(start >> TARGET_PAGE_BITS);
     if (!p) {
         return;
@@ -641,64 +771,4 @@ void tb_invalidate_phys_page_fast(struct page_collection *pages,
     tb_invalidate_phys_page_range__locked(pages, p, start,
                                           start + len, retaddr);
 }
-#else
-/*
- * Called with mmap_lock held.  If pc is not 0 then it indicates the
- * host PC of the faulting store instruction that caused this invalidate.
- * Returns true if the caller needs to abort execution of the current
- * TB (because it was modified by this store and the guest CPU has
- * precise-SMC semantics).
- */
-bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
-{
-    TranslationBlock *tb;
-    PageDesc *p;
-    int n;
-#ifdef TARGET_HAS_PRECISE_SMC
-    TranslationBlock *current_tb = NULL;
-    CPUState *cpu = current_cpu;
-    bool current_tb_modified = false;
-#endif
-
-    assert_memory_lock();
-
-    addr &= TARGET_PAGE_MASK;
-    p = page_find(addr >> TARGET_PAGE_BITS);
-    if (!p) {
-        return false;
-    }
-
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (p->first_tb && pc != 0) {
-        current_tb = tcg_tb_lookup(pc);
-    }
-#endif
-    assert_page_locked(p);
-    PAGE_FOR_EACH_TB(p, tb, n) {
-#ifdef TARGET_HAS_PRECISE_SMC
-        if (current_tb == tb &&
-            (tb_cflags(current_tb) & CF_COUNT_MASK) != 1) {
-            /*
-             * If we are modifying the current TB, we must stop its execution.
-             * We could be more precise by checking that the modification is
-             * after the current PC, but it would require a specialized
-             * function to partially restore the CPU state.
-             */
-            current_tb_modified = true;
-            cpu_restore_state_from_tb(cpu, current_tb, pc, true);
-        }
-#endif /* TARGET_HAS_PRECISE_SMC */
-        tb_phys_invalidate(tb, addr);
-    }
-    p->first_tb = (uintptr_t)NULL;
-#ifdef TARGET_HAS_PRECISE_SMC
-    if (current_tb_modified) {
-        /* Force execution of one insn next time. */
-        cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(cpu);
-        return true;
-    }
-#endif
-
-    return false;
-}
-#endif
+#endif /* CONFIG_USER_ONLY */
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 478301f227..e002981a9f 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -686,7 +686,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
 
     for (index = start; index <= end; index++) {
         TranslationBlock *tb;
-        int n;
+        PageForEachNext n;
 
         pd = page_find(index);
         if (pd == NULL) {
@@ -697,7 +697,7 @@ page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
             goto retry;
         }
         assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(pd, tb, n) {
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
             if (page_trylock_add(set, tb_page_addr0(tb)) ||
                 (tb_page_addr1(tb) != -1 &&
                  page_trylock_add(set, tb_page_addr1(tb)))) {
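As context for the user-only tb_invalidate_phys_page() added above: it is a thin wrapper that rounds the faulting address down to its page and reuses the range-based path. The following is a standalone model of that rounding, not QEMU code; a 4 KiB page size is assumed here, whereas QEMU derives TARGET_PAGE_BITS per target.

    /* Model of the page rounding in tb_invalidate_phys_page(). */
    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_BITS 12                         /* assumed 4 KiB pages */
    #define TARGET_PAGE_SIZE (1ull << TARGET_PAGE_BITS)
    #define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

    int main(void)
    {
        uint64_t addr = 0x40001234;                  /* hypothetical store address */
        uint64_t start = addr & TARGET_PAGE_MASK;    /* 0x40001000 */
        uint64_t end = start + TARGET_PAGE_SIZE;     /* 0x40002000, exclusive */

        printf("invalidate [%#llx, %#llx) for store at %#llx\n",
               (unsigned long long)start, (unsigned long long)end,
               (unsigned long long)addr);
        return 0;
    }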

From patchwork Thu Oct 6 03:11:05 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 16/24] accel/tcg: Use page_reset_target_data in page_set_flags
Date: Wed, 5 Oct 2022 20:11:05 -0700
Message-Id: <20221006031113.1139454-17-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

Use the existing function for clearing target data.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/translate-all.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index e002981a9f..0006290694 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1370,6 +1370,9 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
         flags |= PAGE_WRITE_ORG;
     }
     reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET);
+    if (reset) {
+        page_reset_target_data(start, end);
+    }
     flags &= ~PAGE_RESET;
 
     for (addr = start, len = end - start;
@@ -1387,14 +1390,8 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
              (flags & ~p->flags & PAGE_WRITE))) {
             tb_invalidate_phys_page(addr);
         }
-        if (reset) {
-            g_free(p->target_data);
-            p->target_data = NULL;
-            p->flags = flags;
-        } else {
-            /* Using mprotect on a page does not change sticky bits. */
-            p->flags = (p->flags & PAGE_STICKY) | flags;
-        }
+        /* Using mprotect on a page does not change sticky bits. */
+        p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
    }
 }
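The single flag-merge expression this patch arrives at is worth unpacking. A self-contained model follows; the flag values are illustrative, not QEMU's actual encodings. The point is that a fresh mapping (reset) starts from 0, while mprotect() on an existing page keeps the "sticky" bits (e.g. PAGE_ANON) and replaces the rest.

    /* Model of: p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags; */
    #include <stdio.h>

    enum {
        PAGE_READ  = 1 << 0,
        PAGE_WRITE = 1 << 1,
        PAGE_ANON  = 1 << 2,    /* sticky: survives mprotect */
    };
    #define PAGE_STICKY PAGE_ANON

    static int merge_flags(int old_flags, int new_flags, int reset)
    {
        return (reset ? 0 : old_flags & PAGE_STICKY) | new_flags;
    }

    int main(void)
    {
        int old_flags = PAGE_READ | PAGE_WRITE | PAGE_ANON;

        /* mprotect to read-only: PAGE_ANON survives (prints 0x5). */
        printf("mprotect: %#x\n", merge_flags(old_flags, PAGE_READ, 0));
        /* fresh mmap over the same page: PAGE_ANON is dropped (prints 0x1). */
        printf("reset:    %#x\n", merge_flags(old_flags, PAGE_READ, 1));
        return 0;
    }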

From patchwork Thu Oct 6 03:11:06 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 17/24] accel/tcg: Use tb_invalidate_phys_range in page_set_flags
Date: Wed, 5 Oct 2022 20:11:06 -0700
Message-Id: <20221006031113.1139454-18-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

Flush translation blocks in bulk, rather than page-by-page.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/translate-all.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 0006290694..04401ceac7 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1352,7 +1352,7 @@ int page_get_flags(target_ulong address)
 void page_set_flags(target_ulong start, target_ulong end, int flags)
 {
     target_ulong addr, len;
-    bool reset;
+    bool reset, inval_tb = false;
 
     /* This function should never be called with addresses outside the
        guest address space.  If this assert fires, it probably indicates
@@ -1388,11 +1388,15 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
             && (reset ||
                 !(flags & PAGE_EXEC) ||
                 (flags & ~p->flags & PAGE_WRITE))) {
-            tb_invalidate_phys_page(addr);
+            inval_tb = true;
         }
         /* Using mprotect on a page does not change sticky bits. */
         p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags;
     }
+
+    if (inval_tb) {
+        tb_invalidate_phys_range(start, end);
+    }
 }
 
 void page_reset_target_data(target_ulong start, target_ulong end)
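The batching pattern of this patch, shown as a standalone sketch with stand-in functions rather than QEMU's own: instead of invalidating inside the per-page loop, remember that at least one page needs it and issue a single range call at the end, turning O(pages) invalidation rounds into one.

    #include <stdbool.h>
    #include <stdio.h>

    static void invalidate_range(unsigned long start, unsigned long end)
    {
        printf("invalidate [%#lx, %#lx)\n", start, end);   /* one bulk flush */
    }

    static void set_flags(unsigned long start, unsigned long end, bool exec_lost)
    {
        bool inval_tb = false;

        for (unsigned long addr = start; addr < end; addr += 0x1000) {
            if (exec_lost) {        /* page had code and, say, lost PAGE_EXEC */
                inval_tb = true;    /* was: one invalidate call per page here */
            }
        }
        if (inval_tb) {
            invalidate_range(start, end);
        }
    }

    int main(void)
    {
        set_flags(0x400000, 0x404000, true);   /* four pages, one flush */
        return 0;
    }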

From patchwork Thu Oct 6 03:11:07 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 18/24] accel/tcg: Move TARGET_PAGE_DATA_SIZE impl to user-exec.c
Date: Wed, 5 Oct 2022 20:11:07 -0700
Message-Id: <20221006031113.1139454-19-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

Since "target data" is always user-only, move it out of
translate-all.c to user-exec.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/translate-all.c | 50 ---------------------------------------
 accel/tcg/user-exec.c     | 50 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 50 insertions(+), 50 deletions(-)

diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 04401ceac7..dbd4eff0cf 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -1399,56 +1399,6 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
     }
 }
 
-void page_reset_target_data(target_ulong start, target_ulong end)
-{
-#ifdef TARGET_PAGE_DATA_SIZE
-    target_ulong addr, len;
-
-    /*
-     * This function should never be called with addresses outside the
-     * guest address space.  If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
-    assert_memory_lock();
-
-    start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
-
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
-
-        g_free(p->target_data);
-        p->target_data = NULL;
-    }
-#endif
-}
-
-#ifdef TARGET_PAGE_DATA_SIZE
-void *page_get_target_data(target_ulong address)
-{
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    return p ? p->target_data : NULL;
-}
-
-void *page_alloc_target_data(target_ulong address)
-{
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = NULL;
-
-    if (p->flags & PAGE_VALID) {
-        ret = p->target_data;
-        if (!ret) {
-            p->target_data = ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        }
-    }
-    return ret;
-}
-#endif /* TARGET_PAGE_DATA_SIZE */
-
 int page_check_range(target_ulong start, target_ulong len, int flags)
 {
     PageDesc *p;
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 521aa8b61e..927b91900f 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -210,6 +210,56 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
     return addr;
 }
 
+void page_reset_target_data(target_ulong start, target_ulong end)
+{
+#ifdef TARGET_PAGE_DATA_SIZE
+    target_ulong addr, len;
+
+    /*
+     * This function should never be called with addresses outside the
+     * guest address space.  If this assert fires, it probably indicates
+     * a missing call to h2g_valid.
+     */
+    assert(end - 1 <= GUEST_ADDR_MAX);
+    assert(start < end);
+    assert_memory_lock();
+
+    start = start & TARGET_PAGE_MASK;
+    end = TARGET_PAGE_ALIGN(end);
+
+    for (addr = start, len = end - start;
+         len != 0;
+         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
+        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+
+        g_free(p->target_data);
+        p->target_data = NULL;
+    }
+#endif
+}
+
+#ifdef TARGET_PAGE_DATA_SIZE
+void *page_get_target_data(target_ulong address)
+{
+    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
+    return p ? p->target_data : NULL;
+}
+
+void *page_alloc_target_data(target_ulong address)
+{
+    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
+    void *ret = NULL;
+
+    if (p->flags & PAGE_VALID) {
+        ret = p->target_data;
+        if (!ret) {
+            p->target_data = ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
+        }
+    }
+    return ret;
+}
+#endif
+
 /* The softmmu versions of these helpers are in cputlb.c.  */
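The moved page_alloc_target_data() uses a get-or-allocate pattern: per-page out-of-band storage is created on first use and reused afterwards. A self-contained model, where PageDesc is a toy stand-in for QEMU's structure and calloc() plays the role of g_malloc0():

    #include <stdio.h>
    #include <stdlib.h>

    #define TARGET_PAGE_DATA_SIZE 128   /* assumed; target-defined in QEMU */

    typedef struct PageDesc {
        void *target_data;
    } PageDesc;

    static void *page_alloc_target_data(PageDesc *p)
    {
        void *ret = p->target_data;
        if (!ret) {
            ret = calloc(1, TARGET_PAGE_DATA_SIZE);  /* zeroed on first use */
            p->target_data = ret;
        }
        return ret;
    }

    int main(void)
    {
        PageDesc p = { 0 };
        void *a = page_alloc_target_data(&p);
        void *b = page_alloc_target_data(&p);

        printf("same allocation reused: %s\n", a == b ? "yes" : "no");
        free(p.target_data);
        return 0;
    }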

From patchwork Thu Oct 6 03:11:08 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 19/24] accel/tcg: Simplify page_get/alloc_target_data
Date: Wed, 5 Oct 2022 20:11:08 -0700
Message-Id: <20221006031113.1139454-20-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

Since the only user, Arm MTE, always requires allocation, merge the
get and alloc functions to always produce a non-null result.  Also
assume that the user has already checked page validity.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/exec/cpu-all.h  | 21 ++++++---------------
 accel/tcg/user-exec.c   | 16 ++++------------
 target/arm/mte_helper.c |  4 ----
 3 files changed, 10 insertions(+), 31 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 854adc4ac2..2eb1176538 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -281,27 +281,18 @@ void page_reset_target_data(target_ulong start, target_ulong end);
 int page_check_range(target_ulong start, target_ulong len, int flags);
 
 /**
- * page_alloc_target_data(address)
+ * page_get_target_data(address)
  * @address: guest virtual address
  *
- * Allocate TARGET_PAGE_DATA_SIZE bytes of out-of-band data to associate
- * with the guest page at @address.  If the page is not mapped, NULL will
- * be returned.  If there is existing data associated with @address,
- * no new memory will be allocated.
+ * Return TARGET_PAGE_DATA_SIZE bytes of out-of-band data to associate
+ * with the guest page at @address, allocating it if necessary.  The
+ * caller should already have verified that the address is valid.
  *
  * The memory will be freed when the guest page is deallocated,
  * e.g. with the munmap system call.
  */
-void *page_alloc_target_data(target_ulong address);
-
-/**
- * page_get_target_data(address)
- * @address: guest virtual address
- *
- * Return any out-of-bound memory assocated with the guest page
- * at @address, as per page_alloc_target_data.
- */
-void *page_get_target_data(target_ulong address);
+void *page_get_target_data(target_ulong address)
+    __attribute__((returns_nonnull));
 #endif
 
 CPUArchState *cpu_copy(CPUArchState *env);
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index 927b91900f..fb7d6ee9e9 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -242,19 +242,11 @@ void page_reset_target_data(target_ulong start, target_ulong end)
 void *page_get_target_data(target_ulong address)
 {
     PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    return p ? p->target_data : NULL;
-}
+    void *ret = p->target_data;
 
-void *page_alloc_target_data(target_ulong address)
-{
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = NULL;
-
-    if (p->flags & PAGE_VALID) {
-        ret = p->target_data;
-        if (!ret) {
-            p->target_data = ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        }
+    if (!ret) {
+        ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
+        p->target_data = ret;
     }
     return ret;
 }
diff --git a/target/arm/mte_helper.c b/target/arm/mte_helper.c
index 62d36c127f..d8eefabcd3 100644
--- a/target/arm/mte_helper.c
+++ b/target/arm/mte_helper.c
@@ -95,10 +95,6 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     }
 
     tags = page_get_target_data(clean_ptr);
-    if (tags == NULL) {
-        tags = page_alloc_target_data(clean_ptr);
-        assert(tags != NULL);
-    }
 
     index = extract32(ptr, LOG2_TAG_GRANULE + 1,
                       TARGET_PAGE_BITS - LOG2_TAG_GRANULE - 1);
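What the merged API buys the caller, modeled with stand-in types: with the old split API the MTE helper had to probe and then allocate; after this patch page_get_target_data() always returns non-null for a validated page, so the caller's null check disappears. The sketch below keeps one page's data in a static for simplicity.

    #include <assert.h>
    #include <stdlib.h>

    #define TARGET_PAGE_DATA_SIZE 128        /* assumed size for the model */

    static void *target_data;                /* one page's worth, for demo */

    /* merged get-or-allocate, mirroring the new user-exec.c logic */
    static void *page_get_target_data(void)
    {
        if (!target_data) {
            target_data = calloc(1, TARGET_PAGE_DATA_SIZE);
        }
        return target_data;
    }

    int main(void)
    {
        unsigned char *tags = page_get_target_data();
        assert(tags != NULL);                /* holds whenever calloc succeeds */
        tags[0] = 0xf;                       /* e.g. store an MTE-style tag */
        free(target_data);
        return 0;
    }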

From patchwork Thu Oct 6 03:11:09 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 20/24] accel/tcg: Use interval tree for TARGET_PAGE_DATA_SIZE
Date: Wed, 5 Oct 2022 20:11:09 -0700
Message-Id: <20221006031113.1139454-21-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

Continue weaning user-only away from PageDesc.

Use an interval tree to record target data.
Chunk the data, to minimize allocation overhead.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/internal.h  |   1 -
 accel/tcg/user-exec.c | 110 ++++++++++++++++++++++++++++++++----------
 2 files changed, 85 insertions(+), 26 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 1bd5a02911..8731dc52e2 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -26,7 +26,6 @@
 typedef struct PageDesc {
 #ifdef CONFIG_USER_ONLY
     unsigned long flags;
-    void *target_data;
 #else
     QemuSpin lock;
     /* list of TBs intersecting this ram page */
diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c
index fb7d6ee9e9..bce3d5f335 100644
--- a/accel/tcg/user-exec.c
+++ b/accel/tcg/user-exec.c
@@ -210,47 +210,107 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
     return addr;
 }
 
+#ifdef TARGET_PAGE_DATA_SIZE
+/*
+ * Allocate chunks of target data together.  For the only current user,
+ * if we allocate one hunk per page, we have overhead of 40/128 or 40%.
+ * Therefore, allocate memory for 64 pages at a time for overhead < 1%.
+ */
+#define TPD_PAGES  64
+#define TBD_MASK   (TARGET_PAGE_MASK * TPD_PAGES)
+
+typedef struct TargetPageDataNode {
+    IntervalTreeNode itree;
+    char data[TPD_PAGES][TARGET_PAGE_DATA_SIZE] __attribute__((aligned));
+} TargetPageDataNode;
+
+static IntervalTreeRoot targetdata_root;
+
 void page_reset_target_data(target_ulong start, target_ulong end)
 {
-#ifdef TARGET_PAGE_DATA_SIZE
-    target_ulong addr, len;
+    IntervalTreeNode *n, *next;
+    target_ulong last;
 
-    /*
-     * This function should never be called with addresses outside the
-     * guest address space.  If this assert fires, it probably indicates
-     * a missing call to h2g_valid.
-     */
-    assert(end - 1 <= GUEST_ADDR_MAX);
-    assert(start < end);
     assert_memory_lock();
 
     start = start & TARGET_PAGE_MASK;
-    end = TARGET_PAGE_ALIGN(end);
+    last = TARGET_PAGE_ALIGN(end) - 1;
 
-    for (addr = start, len = end - start;
-         len != 0;
-         len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) {
-        PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, 1);
+    for (n = interval_tree_iter_first(&targetdata_root, start, last),
+         next = n ? interval_tree_iter_next(n, start, last) : NULL;
+         n != NULL;
+         n = next,
+         next = next ? interval_tree_iter_next(n, start, last) : NULL) {
+        target_ulong n_start, n_last, p_ofs, p_len;
+        TargetPageDataNode *t;
 
-        g_free(p->target_data);
-        p->target_data = NULL;
+        if (n->start >= start && n->last <= last) {
+            interval_tree_remove(n, &targetdata_root);
+            g_free(n);
+            continue;
+        }
+
+        if (n->start < start) {
+            n_start = start;
+            p_ofs = (start - n->start) >> TARGET_PAGE_BITS;
+        } else {
+            n_start = n->start;
+            p_ofs = 0;
+        }
+        n_last = MIN(last, n->last);
+        p_len = (n_last + 1 - n_start) >> TARGET_PAGE_BITS;
+
+        t = container_of(n, TargetPageDataNode, itree);
+        memset(t->data[p_ofs], 0, p_len * TARGET_PAGE_DATA_SIZE);
     }
-#endif
 }
 
-#ifdef TARGET_PAGE_DATA_SIZE
 void *page_get_target_data(target_ulong address)
 {
-    PageDesc *p = page_find(address >> TARGET_PAGE_BITS);
-    void *ret = p->target_data;
+    IntervalTreeNode *n;
+    TargetPageDataNode *t;
+    target_ulong page, region;
+    bool locked;
 
-    if (!ret) {
-        ret = g_malloc0(TARGET_PAGE_DATA_SIZE);
-        p->target_data = ret;
+    page = address & TARGET_PAGE_MASK;
+    region = address & TBD_MASK;
+
+    n = interval_tree_iter_first(&targetdata_root, page, page);
+    if (n) {
+        goto found;
     }
-    return ret;
+
+    /*
+     * See util/interval-tree.c re lockless lookups: no false positives but
+     * there are false negatives.  If we find nothing, retry with the mmap
+     * lock acquired.  We also need the lock for the allocation + insert.
+     */
+    locked = have_mmap_lock();
+    if (!locked) {
+        mmap_lock();
+        n = interval_tree_iter_first(&targetdata_root, page, page);
+        if (n) {
+            mmap_unlock();
+            goto found;
+        }
+    }
+
+    t = g_new0(TargetPageDataNode, 1);
+    n = &t->itree;
+    n->start = region;
+    n->last = region | ~TBD_MASK;
+    interval_tree_insert(n, &targetdata_root);
+    if (!locked) {
+        mmap_unlock();
+    }
+
+ found:
+    t = container_of(n, TargetPageDataNode, itree);
+    return t->data[(page - region) >> TARGET_PAGE_BITS];
 }
-#endif
+#else
+void page_reset_target_data(target_ulong start, target_ulong end) { }
+#endif /* TARGET_PAGE_DATA_SIZE */
 
 /* The softmmu versions of these helpers are in cputlb.c.  */
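The arithmetic behind TPD_PAGES, reproduced as a standalone calculation. The 40-byte figure is the per-node bookkeeping cited in the comment above (which rounds 40/128, about 31%, up to "40%"); 128 bytes is Arm MTE's per-page tag data for an assumed 4 KiB page (one 4-bit tag per 16-byte granule: 4096/16/2 = 128). Either way, chunking 64 pages per node brings the overhead below 1%, as the comment claims.

    #include <stdio.h>

    int main(void)
    {
        const double node_overhead = 40.0;   /* bytes per tree node, per comment */
        const double page_data = 128.0;      /* TARGET_PAGE_DATA_SIZE for MTE */
        const int tpd_pages = 64;            /* pages chunked per node */

        printf("one page per node:  %.1f%% overhead\n",
               100.0 * node_overhead / page_data);
        printf("%d pages per node: %.2f%% overhead\n",
               tpd_pages, 100.0 * node_overhead / (tpd_pages * page_data));
        return 0;
    }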

From patchwork Thu Oct 6 03:11:10 2022
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 21/24] accel/tcg: Move page_{get,set}_flags to user-exec.c
Date: Wed, 5 Oct 2022 20:11:10 -0700
Message-Id: <20221006031113.1139454-22-richard.henderson@linaro.org>
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>

This page tracking implementation is specific to user-only, since
the system softmmu version is in cputlb.c.  Move it out of
translate-all.c to user-exec.c.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
---
 accel/tcg/internal.h      |  17 ++
 accel/tcg/translate-all.c | 350 --------------------------------------
 accel/tcg/user-exec.c     | 346 +++++++++++++++++++++++++++++++++++++
 3 files changed, 363 insertions(+), 350 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index 8731dc52e2..250f0daac9 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -33,6 +33,23 @@ typedef struct PageDesc {
 #endif
 } PageDesc;
 
+/*
+ * In system mode we want L1_MAP to be based on ram offsets,
+ * while in user mode we want it to be based on virtual addresses.
+ *
+ * TODO: For user mode, see the caveat re host vs guest virtual
+ * address spaces near GUEST_ADDR_MAX.
+ */
+#if !defined(CONFIG_USER_ONLY)
+#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS
+# define L1_MAP_ADDR_SPACE_BITS  HOST_LONG_BITS
+#else
+# define L1_MAP_ADDR_SPACE_BITS  TARGET_PHYS_ADDR_SPACE_BITS
+#endif
+#else
+# define L1_MAP_ADDR_SPACE_BITS  MIN(HOST_LONG_BITS, TARGET_ABI_BITS)
+#endif
+
 /* Size of the L2 (and L3, etc) page tables.  */
 #define V_L2_BITS 10
 #define V_L2_SIZE (1 << V_L2_BITS)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index dbd4eff0cf..65e557592a 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -109,23 +109,6 @@ struct page_collection {
     struct page_entry *max;
 };
 
-/*
- * In system mode we want L1_MAP to be based on ram offsets,
- * while in user mode we want it to be based on virtual addresses.
- *
- * TODO: For user mode, see the caveat re host vs guest virtual
- * address spaces near GUEST_ADDR_MAX.
- */ -#if !defined(CONFIG_USER_ONLY) -#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS -# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS -#else -# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS -#endif -#else -# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS) -#endif - /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS > sizeof_field(TranslationBlock, trace_vcpu_dstate) @@ -1214,339 +1197,6 @@ void cpu_interrupt(CPUState *cpu, int mask) qatomic_set(&cpu_neg(cpu)->icount_decr.u16.high, -1); } -/* - * Walks guest process memory "regions" one by one - * and calls callback function 'fn' for each region. - */ -struct walk_memory_regions_data { - walk_memory_regions_fn fn; - void *priv; - target_ulong start; - int prot; -}; - -static int walk_memory_regions_end(struct walk_memory_regions_data *data, - target_ulong end, int new_prot) -{ - if (data->start != -1u) { - int rc = data->fn(data->priv, data->start, end, data->prot); - if (rc != 0) { - return rc; - } - } - - data->start = (new_prot ? end : -1u); - data->prot = new_prot; - - return 0; -} - -static int walk_memory_regions_1(struct walk_memory_regions_data *data, - target_ulong base, int level, void **lp) -{ - target_ulong pa; - int i, rc; - - if (*lp == NULL) { - return walk_memory_regions_end(data, base, 0); - } - - if (level == 0) { - PageDesc *pd = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - int prot = pd[i].flags; - - pa = base | (i << TARGET_PAGE_BITS); - if (prot != data->prot) { - rc = walk_memory_regions_end(data, pa, prot); - if (rc != 0) { - return rc; - } - } - } - } else { - void **pp = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - pa = base | ((target_ulong)i << - (TARGET_PAGE_BITS + V_L2_BITS * level)); - rc = walk_memory_regions_1(data, pa, level - 1, pp + i); - if (rc != 0) { - return rc; - } - } - } - - return 0; -} - -int walk_memory_regions(void *priv, walk_memory_regions_fn fn) -{ - struct walk_memory_regions_data data; - uintptr_t i, l1_sz = v_l1_size; - - data.fn = fn; - data.priv = priv; - data.start = -1u; - data.prot = 0; - - for (i = 0; i < l1_sz; i++) { - target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS); - int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i); - if (rc != 0) { - return rc; - } - } - - return walk_memory_regions_end(&data, 0, 0); -} - -static int dump_region(void *priv, target_ulong start, - target_ulong end, unsigned long prot) -{ - FILE *f = (FILE *)priv; - - (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx - " "TARGET_FMT_lx" %c%c%c\n", - start, end, end - start, - ((prot & PAGE_READ) ? 'r' : '-'), - ((prot & PAGE_WRITE) ? 'w' : '-'), - ((prot & PAGE_EXEC) ? 'x' : '-')); - - return 0; -} - -/* dump memory mappings */ -void page_dump(FILE *f) -{ - const int length = sizeof(target_ulong) * 2; - (void) fprintf(f, "%-*s %-*s %-*s %s\n", - length, "start", length, "end", length, "size", "prot"); - walk_memory_regions(f, dump_region); -} - -int page_get_flags(target_ulong address) -{ - PageDesc *p; - - p = page_find(address >> TARGET_PAGE_BITS); - if (!p) { - return 0; - } - return p->flags; -} - -/* - * Allow the target to decide if PAGE_TARGET_[12] may be reset. - * By default, they are not kept. - */ -#ifndef PAGE_TARGET_STICKY -#define PAGE_TARGET_STICKY 0 -#endif -#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) - -/* Modify the flags of a page and invalidate the code if necessary. 
- The flag PAGE_WRITE_ORG is positioned automatically depending - on PAGE_WRITE. The mmap_lock should already be held. */ -void page_set_flags(target_ulong start, target_ulong end, int flags) -{ - target_ulong addr, len; - bool reset, inval_tb = false; - - /* This function should never be called with addresses outside the - guest address space. If this assert fires, it probably indicates - a missing call to h2g_valid. */ - assert(end - 1 <= GUEST_ADDR_MAX); - assert(start < end); - /* Only set PAGE_ANON with new mappings. */ - assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); - assert_memory_lock(); - - start = start & TARGET_PAGE_MASK; - end = TARGET_PAGE_ALIGN(end); - - if (flags & PAGE_WRITE) { - flags |= PAGE_WRITE_ORG; - } - reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET); - if (reset) { - page_reset_target_data(start, end); - } - flags &= ~PAGE_RESET; - - for (addr = start, len = end - start; - len != 0; - len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { - PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true); - - /* - * If the page was executable, but is reset, or is no longer - * executable, or has become writable, then invalidate any code. - */ - if ((p->flags & PAGE_EXEC) - && (reset || - !(flags & PAGE_EXEC) || - (flags & ~p->flags & PAGE_WRITE))) { - inval_tb = true; - } - /* Using mprotect on a page does not change sticky bits. */ - p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags; - } - - if (inval_tb) { - tb_invalidate_phys_range(start, end); - } -} - -int page_check_range(target_ulong start, target_ulong len, int flags) -{ - PageDesc *p; - target_ulong end; - target_ulong addr; - - /* This function should never be called with addresses outside the - guest address space. If this assert fires, it probably indicates - a missing call to h2g_valid. */ - if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { - assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); - } - - if (len == 0) { - return 0; - } - if (start + len - 1 < start) { - /* We've wrapped around. */ - return -1; - } - - /* must do before we loose bits in the next step */ - end = TARGET_PAGE_ALIGN(start + len); - start = start & TARGET_PAGE_MASK; - - for (addr = start, len = end - start; - len != 0; - len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { - p = page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - return -1; - } - if (!(p->flags & PAGE_VALID)) { - return -1; - } - - if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { - return -1; - } - if (flags & PAGE_WRITE) { - if (!(p->flags & PAGE_WRITE_ORG)) { - return -1; - } - /* unprotect the page if it was put read-only because it - contains translated code */ - if (!(p->flags & PAGE_WRITE)) { - if (!page_unprotect(addr, 0)) { - return -1; - } - } - } - } - return 0; -} - -void page_protect(tb_page_addr_t page_addr) -{ - target_ulong addr; - PageDesc *p; - int prot; - - p = page_find(page_addr >> TARGET_PAGE_BITS); - if (p && (p->flags & PAGE_WRITE)) { - /* - * Force the host page as non writable (writes will have a page fault + - * mprotect overhead). - */ - page_addr &= qemu_host_page_mask; - prot = 0; - for (addr = page_addr; addr < page_addr + qemu_host_page_size; - addr += TARGET_PAGE_SIZE) { - - p = page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - continue; - } - prot |= p->flags; - p->flags &= ~PAGE_WRITE; - } - mprotect(g2h_untagged(page_addr), qemu_host_page_size, - (prot & PAGE_BITS) & ~PAGE_WRITE); - } -} - -/* called from signal handler: invalidate the code and unprotect the - * page. 
Return 0 if the fault was not handled, 1 if it was handled, - * and 2 if it was handled but the caller must cause the TB to be - * immediately exited. (We can only return 2 if the 'pc' argument is - * non-zero.) - */ -int page_unprotect(target_ulong address, uintptr_t pc) -{ - unsigned int prot; - bool current_tb_invalidated; - PageDesc *p; - target_ulong host_start, host_end, addr; - - /* Technically this isn't safe inside a signal handler. However we - know this only ever happens in a synchronous SEGV handler, so in - practice it seems to be ok. */ - mmap_lock(); - - p = page_find(address >> TARGET_PAGE_BITS); - if (!p) { - mmap_unlock(); - return 0; - } - - /* if the page was really writable, then we change its - protection back to writable */ - if (p->flags & PAGE_WRITE_ORG) { - current_tb_invalidated = false; - if (p->flags & PAGE_WRITE) { - /* If the page is actually marked WRITE then assume this is because - * this thread raced with another one which got here first and - * set the page to PAGE_WRITE and did the TB invalidate for us. - */ -#ifdef TARGET_HAS_PRECISE_SMC - TranslationBlock *current_tb = tcg_tb_lookup(pc); - if (current_tb) { - current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID; - } -#endif - } else { - host_start = address & qemu_host_page_mask; - host_end = host_start + qemu_host_page_size; - - prot = 0; - for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) { - p = page_find(addr >> TARGET_PAGE_BITS); - p->flags |= PAGE_WRITE; - prot |= p->flags; - - /* and since the content will be modified, we must invalidate - the corresponding translated code. */ - current_tb_invalidated |= - tb_invalidate_phys_page_unwind(addr, pc); - } - mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, - prot & PAGE_BITS); - } - mmap_unlock(); - /* If current TB was invalidated return to main loop */ - return current_tb_invalidated ? 2 : 1; - } - mmap_unlock(); - return 0; -} #endif /* CONFIG_USER_ONLY */ /* diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index bce3d5f335..b6050e2bfe 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -135,6 +135,352 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set, } } +/* + * Walks guest process memory "regions" one by one + * and calls callback function 'fn' for each region. + */ +struct walk_memory_regions_data { + walk_memory_regions_fn fn; + void *priv; + target_ulong start; + int prot; +}; + +static int walk_memory_regions_end(struct walk_memory_regions_data *data, + target_ulong end, int new_prot) +{ + if (data->start != -1u) { + int rc = data->fn(data->priv, data->start, end, data->prot); + if (rc != 0) { + return rc; + } + } + + data->start = (new_prot ? 
end : -1u); + data->prot = new_prot; + + return 0; +} + +static int walk_memory_regions_1(struct walk_memory_regions_data *data, + target_ulong base, int level, void **lp) +{ + target_ulong pa; + int i, rc; + + if (*lp == NULL) { + return walk_memory_regions_end(data, base, 0); + } + + if (level == 0) { + PageDesc *pd = *lp; + + for (i = 0; i < V_L2_SIZE; ++i) { + int prot = pd[i].flags; + + pa = base | (i << TARGET_PAGE_BITS); + if (prot != data->prot) { + rc = walk_memory_regions_end(data, pa, prot); + if (rc != 0) { + return rc; + } + } + } + } else { + void **pp = *lp; + + for (i = 0; i < V_L2_SIZE; ++i) { + pa = base | ((target_ulong)i << + (TARGET_PAGE_BITS + V_L2_BITS * level)); + rc = walk_memory_regions_1(data, pa, level - 1, pp + i); + if (rc != 0) { + return rc; + } + } + } + + return 0; +} + +int walk_memory_regions(void *priv, walk_memory_regions_fn fn) +{ + struct walk_memory_regions_data data; + uintptr_t i, l1_sz = v_l1_size; + + data.fn = fn; + data.priv = priv; + data.start = -1u; + data.prot = 0; + + for (i = 0; i < l1_sz; i++) { + target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS); + int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i); + if (rc != 0) { + return rc; + } + } + + return walk_memory_regions_end(&data, 0, 0); +} + +static int dump_region(void *priv, target_ulong start, + target_ulong end, unsigned long prot) +{ + FILE *f = (FILE *)priv; + + (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx + " "TARGET_FMT_lx" %c%c%c\n", + start, end, end - start, + ((prot & PAGE_READ) ? 'r' : '-'), + ((prot & PAGE_WRITE) ? 'w' : '-'), + ((prot & PAGE_EXEC) ? 'x' : '-')); + + return 0; +} + +/* dump memory mappings */ +void page_dump(FILE *f) +{ + const int length = sizeof(target_ulong) * 2; + (void) fprintf(f, "%-*s %-*s %-*s %s\n", + length, "start", length, "end", length, "size", "prot"); + walk_memory_regions(f, dump_region); +} + +int page_get_flags(target_ulong address) +{ + PageDesc *p; + + p = page_find(address >> TARGET_PAGE_BITS); + if (!p) { + return 0; + } + return p->flags; +} + +/* + * Allow the target to decide if PAGE_TARGET_[12] may be reset. + * By default, they are not kept. + */ +#ifndef PAGE_TARGET_STICKY +#define PAGE_TARGET_STICKY 0 +#endif +#define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) + +/* + * Modify the flags of a page and invalidate the code if necessary. + * The flag PAGE_WRITE_ORG is positioned automatically depending + * on PAGE_WRITE. The mmap_lock should already be held. + */ +void page_set_flags(target_ulong start, target_ulong end, int flags) +{ + target_ulong addr, len; + bool reset, inval_tb = false; + + /* This function should never be called with addresses outside the + guest address space. If this assert fires, it probably indicates + a missing call to h2g_valid. */ + assert(end - 1 <= GUEST_ADDR_MAX); + assert(start < end); + /* Only set PAGE_ANON with new mappings. 
*/ + assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); + assert_memory_lock(); + + start = start & TARGET_PAGE_MASK; + end = TARGET_PAGE_ALIGN(end); + + if (flags & PAGE_WRITE) { + flags |= PAGE_WRITE_ORG; + } + reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET); + if (reset) { + page_reset_target_data(start, end); + } + flags &= ~PAGE_RESET; + + for (addr = start, len = end - start; + len != 0; + len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { + PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true); + + /* + * If the page was executable, but is reset, or is no longer + * executable, or has become writable, then invalidate any code. + */ + if ((p->flags & PAGE_EXEC) + && (reset || + !(flags & PAGE_EXEC) || + (flags & ~p->flags & PAGE_WRITE))) { + inval_tb = true; + } + /* Using mprotect on a page does not change sticky bits. */ + p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags; + } + + if (inval_tb) { + tb_invalidate_phys_range(start, end); + } +} + +int page_check_range(target_ulong start, target_ulong len, int flags) +{ + PageDesc *p; + target_ulong end; + target_ulong addr; + + /* + * This function should never be called with addresses outside the + * guest address space. If this assert fires, it probably indicates + * a missing call to h2g_valid. + */ + if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { + assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); + } + + if (len == 0) { + return 0; + } + if (start + len - 1 < start) { + /* We've wrapped around. */ + return -1; + } + + /* must do before we loose bits in the next step */ + end = TARGET_PAGE_ALIGN(start + len); + start = start & TARGET_PAGE_MASK; + + for (addr = start, len = end - start; + len != 0; + len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { + p = page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + return -1; + } + if (!(p->flags & PAGE_VALID)) { + return -1; + } + + if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { + return -1; + } + if (flags & PAGE_WRITE) { + if (!(p->flags & PAGE_WRITE_ORG)) { + return -1; + } + /* unprotect the page if it was put read-only because it + contains translated code */ + if (!(p->flags & PAGE_WRITE)) { + if (!page_unprotect(addr, 0)) { + return -1; + } + } + } + } + return 0; +} + +void page_protect(tb_page_addr_t page_addr) +{ + target_ulong addr; + PageDesc *p; + int prot; + + p = page_find(page_addr >> TARGET_PAGE_BITS); + if (p && (p->flags & PAGE_WRITE)) { + /* + * Force the host page as non writable (writes will have a page fault + + * mprotect overhead). + */ + page_addr &= qemu_host_page_mask; + prot = 0; + for (addr = page_addr; addr < page_addr + qemu_host_page_size; + addr += TARGET_PAGE_SIZE) { + + p = page_find(addr >> TARGET_PAGE_BITS); + if (!p) { + continue; + } + prot |= p->flags; + p->flags &= ~PAGE_WRITE; + } + mprotect(g2h_untagged(page_addr), qemu_host_page_size, + (prot & PAGE_BITS) & ~PAGE_WRITE); + } +} + +/* + * Called from signal handler: invalidate the code and unprotect the + * page. Return 0 if the fault was not handled, 1 if it was handled, + * and 2 if it was handled but the caller must cause the TB to be + * immediately exited. (We can only return 2 if the 'pc' argument is + * non-zero.) + */ +int page_unprotect(target_ulong address, uintptr_t pc) +{ + unsigned int prot; + bool current_tb_invalidated; + PageDesc *p; + target_ulong host_start, host_end, addr; + + /* + * Technically this isn't safe inside a signal handler. 
However we + * know this only ever happens in a synchronous SEGV handler, so in + * practice it seems to be ok. + */ + mmap_lock(); + + p = page_find(address >> TARGET_PAGE_BITS); + if (!p) { + mmap_unlock(); + return 0; + } + + /* + * If the page was really writable, then we change its + * protection back to writable. + */ + if (p->flags & PAGE_WRITE_ORG) { + current_tb_invalidated = false; + if (p->flags & PAGE_WRITE) { + /* + * If the page is actually marked WRITE then assume this is because + * this thread raced with another one which got here first and + * set the page to PAGE_WRITE and did the TB invalidate for us. + */ +#ifdef TARGET_HAS_PRECISE_SMC + TranslationBlock *current_tb = tcg_tb_lookup(pc); + if (current_tb) { + current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID; + } +#endif + } else { + host_start = address & qemu_host_page_mask; + host_end = host_start + qemu_host_page_size; + + prot = 0; + for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) { + p = page_find(addr >> TARGET_PAGE_BITS); + p->flags |= PAGE_WRITE; + prot |= p->flags; + + /* + * Since the content will be modified, we must invalidate + * the corresponding translated code. + */ + current_tb_invalidated |= + tb_invalidate_phys_page_unwind(addr, pc); + } + mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, + prot & PAGE_BITS); + } + mmap_unlock(); + /* If current TB was invalidated return to main loop */ + return current_tb_invalidated ? 2 : 1; + } + mmap_unlock(); + return 0; +} + static int probe_access_internal(CPUArchState *env, target_ulong addr, int fault_size, MMUAccessType access_type, bool nonfault, uintptr_t ra)
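A note on the return protocol of page_unprotect(), since it is easy to misread in the diff above: 0 means the fault was not caused by our write protection, 1 means it was fixed and the faulting write can simply be retried, and 2 means it was fixed but the TB containing the faulting pc was invalidated. A minimal sketch of a caller, using a hypothetical fixup_write_fault() helper (the real consumers are handle_sigsegv_accerr_write() and the per-host signal glue):

    /*
     * Sketch only: how a synchronous SEGV on a write might be resolved
     * for a page that was write-protected because it holds translated
     * code.  fixup_write_fault() is a made-up name for illustration.
     */
    static bool fixup_write_fault(CPUState *cpu, target_ulong guest_addr,
                                  uintptr_t host_pc)
    {
        switch (page_unprotect(guest_addr, host_pc)) {
        case 0:
            /* Not a page we protected: a genuine guest fault. */
            return false;
        case 1:
            /* PAGE_WRITE restored, stale TBs flushed: retry the write. */
            return true;
        case 2:
            /*
             * As above, but the current TB was invalidated; do not
             * return into stale translated code.
             */
            cpu_loop_exit_noexc(cpu);
        default:
            g_assert_not_reached();
        }
    }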
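page_set_flags() has a complementary contract on the caller's side: the main users are the user-mode mmap/mprotect emulation, which must hold the mmap lock and may only set PAGE_ANON together with PAGE_RESET (per the asserts above). A rough sketch of that calling pattern, loosely modeled on linux-user/mmap.c; the sketch_* names are made up, error handling is omitted, and prot is assumed to already be PAGE_READ/PAGE_WRITE/PAGE_EXEC bits:

    /* A fresh anonymous mapping: wipe old per-page state, mark anon. */
    static void sketch_map_anon(target_ulong start, target_ulong len, int prot)
    {
        mmap_lock();
        page_set_flags(start, start + len,
                       prot | PAGE_VALID | PAGE_RESET | PAGE_ANON);
        mmap_unlock();
    }

    /* mprotect: no PAGE_RESET, so sticky bits such as PAGE_ANON survive. */
    static void sketch_protect(target_ulong start, target_ulong len, int prot)
    {
        mmap_lock();
        page_set_flags(start, start + len, prot | PAGE_VALID);
        mmap_unlock();
    }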
From patchwork Thu Oct 6 03:11:11 2022 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 612857 From: Richard Henderson To: qemu-devel@nongnu.org Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org Subject: [PATCH 22/24] accel/tcg: Use interval tree for user-only page tracking Date: Wed, 5 Oct 2022 20:11:11 -0700 Message-Id: <20221006031113.1139454-23-richard.henderson@linaro.org> In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org> References: <20221006031113.1139454-1-richard.henderson@linaro.org> Finish weaning user-only away from PageDesc. Using an interval tree to track page permissions means that we can represent very large regions efficiently. Resolves: https://gitlab.com/qemu-project/qemu/-/issues/290 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/967 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1214 Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 4 +- accel/tcg/tb-maint.c | 20 +- accel/tcg/user-exec.c | 614 ++++++++++++++++++++++----------- tests/tcg/multiarch/test-vma.c | 22 ++ 4 files changed, 451 insertions(+), 209 deletions(-) create mode 100644 tests/tcg/multiarch/test-vma.c diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index 250f0daac9..c7e157d1cd 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -24,9 +24,7 @@ #endif typedef struct PageDesc { -#ifdef CONFIG_USER_ONLY - unsigned long flags; -#else +#ifndef CONFIG_USER_ONLY QemuSpin lock; /* list of TBs intersecting this ram page */ uintptr_t first_tb; diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 14e8e47a6a..694440cb4a 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -68,15 +68,23 @@ static void page_flush_tb(void) /* Call with mmap_lock held.
*/ static void tb_page_add(TranslationBlock *tb, PageDesc *p1, PageDesc *p2) { - /* translator_loop() must have made all TB pages non-writable */ - assert(!(p1->flags & PAGE_WRITE)); - if (p2) { - assert(!(p2->flags & PAGE_WRITE)); - } + target_ulong addr; + int flags; assert_memory_lock(); - tb->itree.last = tb->itree.start + tb->size - 1; + + /* translator_loop() must have made all TB pages non-writable */ + addr = tb_page_addr0(tb); + flags = page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + + addr = tb_page_addr1(tb); + if (addr != -1) { + flags = page_get_flags(addr); + assert(!(flags & PAGE_WRITE)); + } + interval_tree_insert(&tb->itree, &tb_root); } diff --git a/accel/tcg/user-exec.c b/accel/tcg/user-exec.c index b6050e2bfe..3b47a9e26a 100644 --- a/accel/tcg/user-exec.c +++ b/accel/tcg/user-exec.c @@ -135,106 +135,61 @@ bool handle_sigsegv_accerr_write(CPUState *cpu, sigset_t *old_set, } } -/* - * Walks guest process memory "regions" one by one - * and calls callback function 'fn' for each region. - */ -struct walk_memory_regions_data { - walk_memory_regions_fn fn; - void *priv; - target_ulong start; - int prot; -}; +typedef struct PageFlagsNode { + IntervalTreeNode itree; + int flags; +} PageFlagsNode; -static int walk_memory_regions_end(struct walk_memory_regions_data *data, - target_ulong end, int new_prot) +static IntervalTreeRoot pageflags_root; + +static PageFlagsNode *pageflags_find(target_ulong start, target_long last) { - if (data->start != -1u) { - int rc = data->fn(data->priv, data->start, end, data->prot); - if (rc != 0) { - return rc; - } - } + IntervalTreeNode *n; - data->start = (new_prot ? end : -1u); - data->prot = new_prot; - - return 0; + n = interval_tree_iter_first(&pageflags_root, start, last); + return n ? container_of(n, PageFlagsNode, itree) : NULL; } -static int walk_memory_regions_1(struct walk_memory_regions_data *data, - target_ulong base, int level, void **lp) +static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start, + target_long last) { - target_ulong pa; - int i, rc; + IntervalTreeNode *n; - if (*lp == NULL) { - return walk_memory_regions_end(data, base, 0); - } - - if (level == 0) { - PageDesc *pd = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - int prot = pd[i].flags; - - pa = base | (i << TARGET_PAGE_BITS); - if (prot != data->prot) { - rc = walk_memory_regions_end(data, pa, prot); - if (rc != 0) { - return rc; - } - } - } - } else { - void **pp = *lp; - - for (i = 0; i < V_L2_SIZE; ++i) { - pa = base | ((target_ulong)i << - (TARGET_PAGE_BITS + V_L2_BITS * level)); - rc = walk_memory_regions_1(data, pa, level - 1, pp + i); - if (rc != 0) { - return rc; - } - } - } - - return 0; + n = interval_tree_iter_next(&p->itree, start, last); + return n ? 
container_of(n, PageFlagsNode, itree) : NULL; } int walk_memory_regions(void *priv, walk_memory_regions_fn fn) { - struct walk_memory_regions_data data; - uintptr_t i, l1_sz = v_l1_size; + IntervalTreeNode *n; + int rc = 0; - data.fn = fn; - data.priv = priv; - data.start = -1u; - data.prot = 0; + mmap_lock(); + for (n = interval_tree_iter_first(&pageflags_root, 0, -1); + n != NULL; + n = interval_tree_iter_next(n, 0, -1)) { + PageFlagsNode *p = container_of(n, PageFlagsNode, itree); - for (i = 0; i < l1_sz; i++) { - target_ulong base = i << (v_l1_shift + TARGET_PAGE_BITS); - int rc = walk_memory_regions_1(&data, base, v_l2_levels, l1_map + i); + rc = fn(priv, n->start, n->last + 1, p->flags); if (rc != 0) { - return rc; + break; } } + mmap_unlock(); - return walk_memory_regions_end(&data, 0, 0); + return rc; } static int dump_region(void *priv, target_ulong start, - target_ulong end, unsigned long prot) + target_ulong end, unsigned long prot) { FILE *f = (FILE *)priv; - (void) fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx - " "TARGET_FMT_lx" %c%c%c\n", - start, end, end - start, - ((prot & PAGE_READ) ? 'r' : '-'), - ((prot & PAGE_WRITE) ? 'w' : '-'), - ((prot & PAGE_EXEC) ? 'x' : '-')); - + fprintf(f, TARGET_FMT_lx"-"TARGET_FMT_lx" "TARGET_FMT_lx" %c%c%c\n", + start, end, end - start, + ((prot & PAGE_READ) ? 'r' : '-'), + ((prot & PAGE_WRITE) ? 'w' : '-'), + ((prot & PAGE_EXEC) ? 'x' : '-')); return 0; } @@ -242,22 +197,134 @@ static int dump_region(void *priv, target_ulong start, void page_dump(FILE *f) { const int length = sizeof(target_ulong) * 2; - (void) fprintf(f, "%-*s %-*s %-*s %s\n", + + fprintf(f, "%-*s %-*s %-*s %s\n", length, "start", length, "end", length, "size", "prot"); walk_memory_regions(f, dump_region); } int page_get_flags(target_ulong address) { - PageDesc *p; + PageFlagsNode *p = pageflags_find(address, address); - p = page_find(address >> TARGET_PAGE_BITS); + /* + * See util/interval-tree.c re lockless lookups: no false positives but + * there are false negatives. If we find nothing, retry with the mmap + * lock acquired. + */ if (!p) { - return 0; + if (have_mmap_lock()) { + return 0; + } + mmap_lock(); + p = pageflags_find(address, address); + mmap_unlock(); + if (!p) { + return 0; + } } return p->flags; } +/* A subroutine of page_set_flags: insert a new node for [start,last]. */ +static void pageflags_create(target_ulong start, target_ulong last, int flags) +{ + PageFlagsNode *p = g_new(PageFlagsNode, 1); + + p->itree.start = start; + p->itree.last = last; + p->flags = flags; + interval_tree_insert(&p->itree, &pageflags_root); +} + +/* A subroutine of page_set_flags: remove everything in [start,last]. */ +static bool pageflags_unset(target_ulong start, target_ulong last) +{ + bool inval_tb = false; + + while (true) { + PageFlagsNode *p = pageflags_find(start, last); + target_ulong p_last; + + if (!p) { + break; + } + + if (p->flags & PAGE_EXEC) { + inval_tb = true; + } + + interval_tree_remove(&p->itree, &pageflags_root); + p_last = p->itree.last; + + if (p->itree.start < start) { + /* Truncate the node from the end, or split out the middle. */ + p->itree.last = start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + if (last < p_last) { + pageflags_create(last + 1, p_last, p->flags); + break; + } + } else if (p_last <= last) { + /* Range completely covers node -- remove it. */ + g_free(p); + } else { + /* Truncate the node from the start. 
*/ + p->itree.start = last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + break; + } + } + + return inval_tb; +} + +/* + * A subroutine of page_set_flags: nothing overlaps [start,last], + * but check adjacent mappings and maybe merge into a single range. + */ +static void pageflags_create_merge(target_ulong start, target_ulong last, + int flags) +{ + PageFlagsNode *next = NULL, *prev = NULL; + + if (start > 0) { + prev = pageflags_find(start - 1, start - 1); + if (prev) { + if (prev->flags == flags) { + interval_tree_remove(&prev->itree, &pageflags_root); + } else { + prev = NULL; + } + } + } + if (last + 1 != 0) { + next = pageflags_find(last + 1, last + 1); + if (next) { + if (next->flags == flags) { + interval_tree_remove(&next->itree, &pageflags_root); + } else { + next = NULL; + } + } + } + + if (prev) { + if (next) { + prev->itree.last = next->itree.last; + g_free(next); + } else { + prev->itree.last = last; + } + interval_tree_insert(&prev->itree, &pageflags_root); + } else if (next) { + next->itree.start = start; + interval_tree_insert(&next->itree, &pageflags_root); + } else { + pageflags_create(start, last, flags); + } +} + /* * Allow the target to decide if PAGE_TARGET_[12] may be reset. * By default, they are not kept. @@ -267,6 +334,146 @@ int page_get_flags(target_ulong address) #endif #define PAGE_STICKY (PAGE_ANON | PAGE_PASSTHROUGH | PAGE_TARGET_STICKY) +/* A subroutine of page_set_flags: add flags to [start,last]. */ +static bool pageflags_set_clear(target_ulong start, target_ulong last, + int set_flags, int clear_flags) +{ + PageFlagsNode *p; + target_ulong p_start, p_last; + int p_flags, merge_flags; + bool inval_tb = false; + + restart: + p = pageflags_find(start, last); + if (!p) { + if (set_flags) { + pageflags_create_merge(start, last, set_flags); + } + goto done; + } + + p_start = p->itree.start; + p_last = p->itree.last; + p_flags = p->flags; + /* Using mprotect on a page does not change sticky bits. */ + merge_flags = (p_flags & ~clear_flags) | set_flags; + + /* + * Need to flush if an overlapping executable region + * removes exec, or adds write. + */ + if ((p_flags & PAGE_EXEC) + && (!(merge_flags & PAGE_EXEC) + || (merge_flags & ~p_flags & PAGE_WRITE))) { + inval_tb = true; + } + + /* + * If there is an exact range match, update and return without + * attempting to merge with adjacent regions. + */ + if (start == p_start && last == p_last) { + if (merge_flags) { + p->flags = merge_flags; + } else { + interval_tree_remove(&p->itree, &pageflags_root); + g_free(p); + } + goto done; + } + + /* + * If sticky bits affect the original mapping, then we must be more + * careful about the existing intervals and the separate flags. 
+ */ + if (set_flags != merge_flags) { + if (p_start < start) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.last = start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + + if (last < p_last) { + if (merge_flags) { + pageflags_create(start, last, merge_flags); + } + pageflags_create(last + 1, p_last, p_flags); + } else { + if (merge_flags) { + pageflags_create(start, p_last, merge_flags); + } + if (p_last < last) { + start = p_last + 1; + goto restart; + } + } + } else { + if (start < p_start && set_flags) { + pageflags_create(start, p_start - 1, set_flags); + } + if (last < p_last) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.start = last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + if (merge_flags) { + pageflags_create(start, last, merge_flags); + } + } else { + if (merge_flags) { + p->flags = merge_flags; + } else { + interval_tree_remove(&p->itree, &pageflags_root); + g_free(p); + } + if (p_last < last) { + start = p_last + 1; + goto restart; + } + } + } + goto done; + } + + /* If flags are not changing for this range, incorporate it. */ + if (set_flags == p_flags) { + if (start < p_start) { + interval_tree_remove(&p->itree, &pageflags_root); + p->itree.start = start; + interval_tree_insert(&p->itree, &pageflags_root); + } + if (p_last < last) { + start = p_last + 1; + goto restart; + } + goto done; + } + + /* Maybe split out head and/or tail ranges with the original flags. */ + interval_tree_remove(&p->itree, &pageflags_root); + if (p_start < start) { + p->itree.last = start - 1; + interval_tree_insert(&p->itree, &pageflags_root); + + if (p_last < last) { + goto restart; + } + if (last < p_last) { + pageflags_create(last + 1, p_last, p_flags); + } + } else if (last < p_last) { + p->itree.start = last + 1; + interval_tree_insert(&p->itree, &pageflags_root); + } else { + g_free(p); + goto restart; + } + if (set_flags) { + pageflags_create(start, last, set_flags); + } + + done: + return inval_tb; +} + /* * Modify the flags of a page and invalidate the code if necessary. * The flag PAGE_WRITE_ORG is positioned automatically depending @@ -274,49 +481,41 @@ int page_get_flags(target_ulong address) */ void page_set_flags(target_ulong start, target_ulong end, int flags) { - target_ulong addr, len; - bool reset, inval_tb = false; + target_ulong last; + bool reset = false; + bool inval_tb = false; /* This function should never be called with addresses outside the guest address space. If this assert fires, it probably indicates a missing call to h2g_valid. */ - assert(end - 1 <= GUEST_ADDR_MAX); assert(start < end); + assert(end - 1 <= GUEST_ADDR_MAX); /* Only set PAGE_ANON with new mappings. */ assert(!(flags & PAGE_ANON) || (flags & PAGE_RESET)); assert_memory_lock(); start = start & TARGET_PAGE_MASK; end = TARGET_PAGE_ALIGN(end); + last = end - 1; - if (flags & PAGE_WRITE) { - flags |= PAGE_WRITE_ORG; - } - reset = !(flags & PAGE_VALID) || (flags & PAGE_RESET); - if (reset) { - page_reset_target_data(start, end); - } - flags &= ~PAGE_RESET; - - for (addr = start, len = end - start; - len != 0; - len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { - PageDesc *p = page_find_alloc(addr >> TARGET_PAGE_BITS, true); - - /* - * If the page was executable, but is reset, or is no longer - * executable, or has become writable, then invalidate any code. 
- */ - if ((p->flags & PAGE_EXEC) - && (reset || - !(flags & PAGE_EXEC) || - (flags & ~p->flags & PAGE_WRITE))) { - inval_tb = true; + if (!(flags & PAGE_VALID)) { + flags = 0; + } else { + reset = flags & PAGE_RESET; + flags &= ~PAGE_RESET; + if (flags & PAGE_WRITE) { + flags |= PAGE_WRITE_ORG; } - /* Using mprotect on a page does not change sticky bits. */ - p->flags = (reset ? 0 : p->flags & PAGE_STICKY) | flags; } + if (!flags || reset) { + page_reset_target_data(start, end); + inval_tb |= pageflags_unset(start, last); + } + if (flags) { + inval_tb |= pageflags_set_clear(start, last, flags, + ~(reset ? 0 : PAGE_STICKY)); + } if (inval_tb) { tb_invalidate_phys_range(start, end); } @@ -324,87 +523,89 @@ void page_set_flags(target_ulong start, target_ulong end, int flags) int page_check_range(target_ulong start, target_ulong len, int flags) { - PageDesc *p; - target_ulong end; - target_ulong addr; - - /* - * This function should never be called with addresses outside the - * guest address space. If this assert fires, it probably indicates - * a missing call to h2g_valid. - */ - if (TARGET_ABI_BITS > L1_MAP_ADDR_SPACE_BITS) { - assert(start < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS)); - } + target_ulong last; if (len == 0) { - return 0; - } - if (start + len - 1 < start) { - /* We've wrapped around. */ - return -1; + return 0; /* trivial length */ } - /* must do before we loose bits in the next step */ - end = TARGET_PAGE_ALIGN(start + len); - start = start & TARGET_PAGE_MASK; + last = start + len - 1; + if (last < start) { + return -1; /* wrap around */ + } + + while (true) { + PageFlagsNode *p = pageflags_find(start, last); + int missing; - for (addr = start, len = end - start; - len != 0; - len -= TARGET_PAGE_SIZE, addr += TARGET_PAGE_SIZE) { - p = page_find(addr >> TARGET_PAGE_BITS); if (!p) { - return -1; + return -1; /* entire region invalid */ } - if (!(p->flags & PAGE_VALID)) { - return -1; + if (start < p->itree.start) { + return -1; /* initial bytes invalid */ } - if ((flags & PAGE_READ) && !(p->flags & PAGE_READ)) { - return -1; + missing = flags & ~p->flags; + if (missing & PAGE_READ) { + return -1; /* page not readable */ } - if (flags & PAGE_WRITE) { + if (missing & PAGE_WRITE) { if (!(p->flags & PAGE_WRITE_ORG)) { + return -1; /* page not writable */ + } + /* Asking about writable, but has been protected: undo. */ + if (!page_unprotect(start, 0)) { return -1; } - /* unprotect the page if it was put read-only because it - contains translated code */ - if (!(p->flags & PAGE_WRITE)) { - if (!page_unprotect(addr, 0)) { - return -1; - } + /* TODO: page_unprotect should take a range, not a single page. */ + if (last - start < TARGET_PAGE_SIZE) { + return 0; /* ok */ } + start += TARGET_PAGE_SIZE; + continue; } + + if (last <= p->itree.last) { + return 0; /* ok */ + } + start = p->itree.last + 1; } - return 0; } -void page_protect(tb_page_addr_t page_addr) +void page_protect(tb_page_addr_t address) { - target_ulong addr; - PageDesc *p; + PageFlagsNode *p; + target_ulong start, last; int prot; - p = page_find(page_addr >> TARGET_PAGE_BITS); - if (p && (p->flags & PAGE_WRITE)) { - /* - * Force the host page as non writable (writes will have a page fault + - * mprotect overhead). 
- */ - page_addr &= qemu_host_page_mask; - prot = 0; - for (addr = page_addr; addr < page_addr + qemu_host_page_size; - addr += TARGET_PAGE_SIZE) { + assert_memory_lock(); - p = page_find(addr >> TARGET_PAGE_BITS); - if (!p) { - continue; - } + if (qemu_host_page_size <= TARGET_PAGE_SIZE) { + start = address & TARGET_PAGE_MASK; + last = start + TARGET_PAGE_SIZE - 1; + } else { + start = address & qemu_host_page_mask; + last = start + qemu_host_page_size - 1; + } + + p = pageflags_find(start, last); + if (!p) { + return; + } + prot = p->flags; + + if (unlikely(p->itree.last < last)) { + /* More than one protection region covers the one host page. */ + assert(TARGET_PAGE_SIZE < qemu_host_page_size); + while ((p = pageflags_next(p, start, last)) != NULL) { prot |= p->flags; - p->flags &= ~PAGE_WRITE; } - mprotect(g2h_untagged(page_addr), qemu_host_page_size, - (prot & PAGE_BITS) & ~PAGE_WRITE); + } + + if (prot & PAGE_WRITE) { + pageflags_set_clear(start, last, 0, PAGE_WRITE); + mprotect(g2h_untagged(start), qemu_host_page_size, + prot & (PAGE_READ | PAGE_EXEC) ? PROT_READ : PROT_NONE); } } @@ -417,10 +618,8 @@ void page_protect(tb_page_addr_t page_addr) */ int page_unprotect(target_ulong address, uintptr_t pc) { - unsigned int prot; + PageFlagsNode *p; bool current_tb_invalidated; - PageDesc *p; - target_ulong host_start, host_end, addr; /* * Technically this isn't safe inside a signal handler. However we @@ -429,40 +628,54 @@ int page_unprotect(target_ulong address, uintptr_t pc) */ mmap_lock(); - p = page_find(address >> TARGET_PAGE_BITS); - if (!p) { + p = pageflags_find(address, address); + + /* If this address was not really writable, nothing to do. */ + if (!p || !(p->flags & PAGE_WRITE_ORG)) { mmap_unlock(); return 0; } - /* - * If the page was really writable, then we change its - * protection back to writable. - */ - if (p->flags & PAGE_WRITE_ORG) { - current_tb_invalidated = false; - if (p->flags & PAGE_WRITE) { - /* - * If the page is actually marked WRITE then assume this is because - * this thread raced with another one which got here first and - * set the page to PAGE_WRITE and did the TB invalidate for us. - */ + current_tb_invalidated = false; + if (p->flags & PAGE_WRITE) { + /* + * If the page is actually marked WRITE then assume this is because + * this thread raced with another one which got here first and + * set the page to PAGE_WRITE and did the TB invalidate for us. 
+ */ #ifdef TARGET_HAS_PRECISE_SMC - TranslationBlock *current_tb = tcg_tb_lookup(pc); - if (current_tb) { - current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID; - } + TranslationBlock *current_tb = tcg_tb_lookup(pc); + if (current_tb) { + current_tb_invalidated = tb_cflags(current_tb) & CF_INVALID; + } #endif + } else { + target_ulong start, len, i; + int prot; + + if (qemu_host_page_size <= TARGET_PAGE_SIZE) { + start = address & TARGET_PAGE_MASK; + len = TARGET_PAGE_SIZE; + prot = p->flags | PAGE_WRITE; + pageflags_set_clear(start, start + len - 1, PAGE_WRITE, 0); + current_tb_invalidated = tb_invalidate_phys_page_unwind(start, pc); } else { - host_start = address & qemu_host_page_mask; - host_end = host_start + qemu_host_page_size; - + start = address & qemu_host_page_mask; + len = qemu_host_page_size; prot = 0; - for (addr = host_start; addr < host_end; addr += TARGET_PAGE_SIZE) { - p = page_find(addr >> TARGET_PAGE_BITS); - p->flags |= PAGE_WRITE; - prot |= p->flags; + for (i = 0; i < len; i += TARGET_PAGE_SIZE) { + target_ulong addr = start + i; + + p = pageflags_find(addr, addr); + if (p) { + prot |= p->flags; + if (p->flags & PAGE_WRITE_ORG) { + prot |= PAGE_WRITE; + pageflags_set_clear(addr, addr + TARGET_PAGE_SIZE - 1, + PAGE_WRITE, 0); + } + } /* * Since the content will be modified, we must invalidate * the corresponding translated code. @@ -470,15 +683,16 @@ int page_unprotect(target_ulong address, uintptr_t pc) current_tb_invalidated |= tb_invalidate_phys_page_unwind(addr, pc); } - mprotect((void *)g2h_untagged(host_start), qemu_host_page_size, - prot & PAGE_BITS); } - mmap_unlock(); - /* If current TB was invalidated return to main loop */ - return current_tb_invalidated ? 2 : 1; + if (prot & PAGE_EXEC) { + prot = (prot & ~PAGE_EXEC) | PAGE_READ; + } + mprotect((void *)g2h_untagged(start), len, prot & PAGE_BITS); } mmap_unlock(); - return 0; + + /* If current TB was invalidated return to main loop */ + return current_tb_invalidated ? 2 : 1; } static int probe_access_internal(CPUArchState *env, target_ulong addr, diff --git a/tests/tcg/multiarch/test-vma.c b/tests/tcg/multiarch/test-vma.c new file mode 100644 index 0000000000..2893d60334 --- /dev/null +++ b/tests/tcg/multiarch/test-vma.c @@ -0,0 +1,22 @@ +/* + * Test very large vma allocations. + * The qemu out-of-memory condition was within the mmap syscall itself. + * If the syscall actually returns with MAP_FAILED, the test succeeded. + */ +#include + +int main() +{ + int n = sizeof(size_t) == 4 ? 
32 : 45; + + for (int i = 28; i < n; i++) { + size_t l = (size_t)1 << i; + void *p = mmap(0, l, PROT_NONE, + MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0); + if (p == MAP_FAILED) { + break; + } + munmap(p, l); + } + return 0; +}
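The test above makes the commit message's efficiency claim concrete: on a 64-bit host each iteration reserves up to 2**44 bytes with PROT_NONE. Under the old radix table, page_set_flags() touched one PageDesc per target page (2**32 of them for 2**44 bytes with 4 KiB pages), which is what made QEMU run out of memory inside its own mmap emulation; under the new scheme each reservation is a single tree node. A sketch of the resulting bookkeeping, reusing the PageFlagsNode shape from the patch (the start address is made up, and PAGE_VALID | PAGE_ANON is roughly what an anonymous PROT_NONE mapping would store):

    /*
     * One PROT_NONE reservation of 2**40 bytes is exactly one node;
     * PROT_NONE carries no R/W/X bits.
     */
    PageFlagsNode n = {
        .itree.start = 0x7f0000000000ull,                     /* made up */
        .itree.last  = 0x7f0000000000ull + (1ull << 40) - 1,
        .flags       = PAGE_VALID | PAGE_ANON,
    };

    /*
     * A later mprotect(PROT_READ) of one page in the middle splits this
     * into three nodes, rather than rewriting 2**28 per-page entries:
     *   [start, page - 1]       PAGE_VALID | PAGE_ANON
     *   [page, page + 4095]     PAGE_VALID | PAGE_ANON | PAGE_READ
     *   [page + 4096, last]     PAGE_VALID | PAGE_ANON
     */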
From patchwork Thu Oct 6 03:11:12 2022 X-Patchwork-Submitter: Richard Henderson X-Patchwork-Id: 612860 From: Richard Henderson To: qemu-devel@nongnu.org Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org Subject: [PATCH 23/24] accel/tcg: Move PageDesc tree into tb-maint.c for system Date: Wed, 5 Oct 2022 20:11:12 -0700 Message-Id: <20221006031113.1139454-24-richard.henderson@linaro.org> In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org> References: <20221006031113.1139454-1-richard.henderson@linaro.org> Now that PageDesc is not used for user-only, and for system it is only used for tb maintenance, move the implementation into tb-maint.c, appropriately ifdefed. We have not yet eliminated all references to PageDesc for user-only, so retain a typedef to the structure without definition. Signed-off-by: Richard Henderson --- accel/tcg/internal.h | 49 +++----------- accel/tcg/tb-maint.c | 130 ++++++++++++++++++++++++++++++++++++-- accel/tcg/translate-all.c | 95 ---------------------------- 3 files changed, 134 insertions(+), 140 deletions(-)
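The "typedef to the structure without definition" mentioned in the message is the standard opaque-struct pattern, which the internal.h hunk below applies: user-only code can keep declaring and passing PageDesc pointers, but any attempt to dereference one there no longer compiles. Distilled from the hunk into a self-contained sketch:

    /* Name the type for both configurations... */
    typedef struct PageDesc PageDesc;

    #ifndef CONFIG_USER_ONLY
    /* ...but complete it only for the system build. */
    struct PageDesc {
        QemuSpin lock;
        /* list of TBs intersecting this ram page */
        uintptr_t first_tb;
    };
    #endif

    /* Prototypes using PageDesc * remain legal everywhere... */
    void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                        PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc);

    /*
     * ...while "p->first_tb" in user-only code is now a compile error,
     * because the struct type is incomplete there.
     */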
diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h index c7e157d1cd..c6c9e02cfd 100644 --- a/accel/tcg/internal.h +++ b/accel/tcg/internal.h @@ -23,51 +23,13 @@ #define assert_memory_lock() tcg_debug_assert(have_mmap_lock()) #endif -typedef struct PageDesc { +typedef struct PageDesc PageDesc; #ifndef CONFIG_USER_ONLY +struct PageDesc { QemuSpin lock; /* list of TBs intersecting this ram page */ uintptr_t first_tb; -#endif -} PageDesc; - -/* - * In system mode we want L1_MAP to be based on ram offsets, - * while in user mode we want it to be based on virtual addresses. - * - * TODO: For user mode, see the caveat re host vs guest virtual - * address spaces near GUEST_ADDR_MAX. - */ -#if !defined(CONFIG_USER_ONLY) -#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS -# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS -#else -# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS -#endif -#else -# define L1_MAP_ADDR_SPACE_BITS MIN(HOST_LONG_BITS, TARGET_ABI_BITS) -#endif - -/* Size of the L2 (and L3, etc) page tables. */ -#define V_L2_BITS 10 -#define V_L2_SIZE (1 << V_L2_BITS) - -/* - * L1 Mapping properties - */ -extern int v_l1_size; -extern int v_l1_shift; -extern int v_l2_levels; - -/* - * The bottom level has pointers to PageDesc, and is indexed by - * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size. - */ -#define V_L1_MIN_BITS 4 -#define V_L1_MAX_BITS (V_L2_BITS + 3) -#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) - -extern void *l1_map[V_L1_MAX_SIZE]; +}; PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc); @@ -76,6 +38,11 @@ static inline PageDesc *page_find(tb_page_addr_t index) { return page_find_alloc(index, false); } +void page_table_config_init(void); +#else +static inline void page_table_config_init(void) { } +#endif + /* list iterators for lists of tagged pointers in TranslationBlock */ #define TB_FOR_EACH_TAGGED(head, tb, n, field) \ for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \ diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c index 694440cb4a..31d0a74aa9 100644 --- a/accel/tcg/tb-maint.c +++ b/accel/tcg/tb-maint.c @@ -127,6 +127,121 @@ static PageForEachNext foreach_tb_next(PageForEachNext tb, } #else +/* + * In system mode we want L1_MAP to be based on ram offsets. + */ +#if HOST_LONG_BITS < TARGET_PHYS_ADDR_SPACE_BITS +# define L1_MAP_ADDR_SPACE_BITS HOST_LONG_BITS +#else +# define L1_MAP_ADDR_SPACE_BITS TARGET_PHYS_ADDR_SPACE_BITS +#endif + +/* Size of the L2 (and L3, etc) page tables. */ +#define V_L2_BITS 10 +#define V_L2_SIZE (1 << V_L2_BITS) + +/* + * L1 Mapping properties + */ +static int v_l1_size; +static int v_l1_shift; +static int v_l2_levels; + +/* + * The bottom level has pointers to PageDesc, and is indexed by + * anything from 4 to (V_L2_BITS + 3) bits, depending on target page size. + */ +#define V_L1_MIN_BITS 4 +#define V_L1_MAX_BITS (V_L2_BITS + 3) +#define V_L1_MAX_SIZE (1 << V_L1_MAX_BITS) + +static void *l1_map[V_L1_MAX_SIZE]; + +void page_table_config_init(void) +{ + uint32_t v_l1_bits; + + assert(TARGET_PAGE_BITS); + /* The bits remaining after N lower levels of page tables. */ + v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS; + if (v_l1_bits < V_L1_MIN_BITS) { + v_l1_bits += V_L2_BITS; + } + + v_l1_size = 1 << v_l1_bits; + v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits; + v_l2_levels = v_l1_shift / V_L2_BITS - 1; + + assert(v_l1_bits <= V_L1_MAX_BITS); + assert(v_l1_shift % V_L2_BITS == 0); + assert(v_l2_levels >= 0); +} + +PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc) +{ + PageDesc *pd; + void **lp; + int i; + + /* Level 1. Always allocated. */ + lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1)); + + /* Level 2..N-1. */ + for (i = v_l2_levels; i > 0; i--) { + void **p = qatomic_rcu_read(lp); + + if (p == NULL) { + void *existing; + + if (!alloc) { + return NULL; + } + p = g_new0(void *, V_L2_SIZE); + existing = qatomic_cmpxchg(lp, NULL, p); + if (unlikely(existing)) { + g_free(p); + p = existing; + } + } + + lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1)); + } + + pd = qatomic_rcu_read(lp); + if (pd == NULL) { + void *existing; + + if (!alloc) { + return NULL; + } + pd = g_new0(PageDesc, V_L2_SIZE); +#ifndef CONFIG_USER_ONLY + { + int i; + + for (i = 0; i < V_L2_SIZE; i++) { + qemu_spin_init(&pd[i].lock); + } + } +#endif + existing = qatomic_cmpxchg(lp, NULL, pd); + if (unlikely(existing)) { +#ifndef CONFIG_USER_ONLY + { + int i; + + for (i = 0; i < V_L2_SIZE; i++) { + qemu_spin_destroy(&pd[i].lock); + } + } +#endif + g_free(pd); + pd = existing; + } + } + + return pd + (index & (V_L2_SIZE - 1)); +} /* Set to NULL all the 'first_tb' fields in all PageDescs.
 static void page_flush_tb_1(int level, void **lp)
@@ -420,6 +535,17 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
     qemu_thread_jit_execute();
 }
 
+#ifdef CONFIG_USER_ONLY
+static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                                  PageDesc **ret_p2, tb_page_addr_t phys2,
+                                  bool alloc)
+{
+    *ret_p1 = NULL;
+    *ret_p2 = NULL;
+}
+static inline void page_lock_tb(const TranslationBlock *tb) { }
+static inline void page_unlock_tb(const TranslationBlock *tb) { }
+#else
 static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                            PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
 {
@@ -460,10 +586,6 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
     }
 }
 
-#ifdef CONFIG_USER_ONLY
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
-#else
 /* lock the page(s) of a TB in the correct acquisition order */
 static void page_lock_tb(const TranslationBlock *tb)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 65e557592a..0537723b33 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -114,37 +114,8 @@ QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
                   * BITS_PER_BYTE);
 
-/*
- * L1 Mapping properties
- */
-int v_l1_size;
-int v_l1_shift;
-int v_l2_levels;
-
-void *l1_map[V_L1_MAX_SIZE];
-
 TBContext tb_ctx;
 
-static void page_table_config_init(void)
-{
-    uint32_t v_l1_bits;
-
-    assert(TARGET_PAGE_BITS);
-    /* The bits remaining after N lower levels of page tables. */
-    v_l1_bits = (L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS) % V_L2_BITS;
-    if (v_l1_bits < V_L1_MIN_BITS) {
-        v_l1_bits += V_L2_BITS;
-    }
-
-    v_l1_size = 1 << v_l1_bits;
-    v_l1_shift = L1_MAP_ADDR_SPACE_BITS - TARGET_PAGE_BITS - v_l1_bits;
-    v_l2_levels = v_l1_shift / V_L2_BITS - 1;
-
-    assert(v_l1_bits <= V_L1_MAX_BITS);
-    assert(v_l1_shift % V_L2_BITS == 0);
-    assert(v_l2_levels >= 0);
-}
-
 /* Encode VAL as a signed leb128 sequence at P.
    Return P incremented past the encoded value. */
 static uint8_t *encode_sleb128(uint8_t *p, target_long val)
@@ -381,72 +352,6 @@ void page_init(void)
 #endif
 }
 
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
-{
-    PageDesc *pd;
-    void **lp;
-    int i;
-
-    /* Level 1. Always allocated. */
-    lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
-
-    /* Level 2..N-1. */
-    for (i = v_l2_levels; i > 0; i--) {
-        void **p = qatomic_rcu_read(lp);
-
-        if (p == NULL) {
-            void *existing;
-
-            if (!alloc) {
-                return NULL;
-            }
-            p = g_new0(void *, V_L2_SIZE);
-            existing = qatomic_cmpxchg(lp, NULL, p);
-            if (unlikely(existing)) {
-                g_free(p);
-                p = existing;
-            }
-        }
-
-        lp = p + ((index >> (i * V_L2_BITS)) & (V_L2_SIZE - 1));
-    }
-
-    pd = qatomic_rcu_read(lp);
-    if (pd == NULL) {
-        void *existing;
-
-        if (!alloc) {
-            return NULL;
-        }
-        pd = g_new0(PageDesc, V_L2_SIZE);
-#ifndef CONFIG_USER_ONLY
-        {
-            int i;
-
-            for (i = 0; i < V_L2_SIZE; i++) {
-                qemu_spin_init(&pd[i].lock);
-            }
-        }
-#endif
-        existing = qatomic_cmpxchg(lp, NULL, pd);
-        if (unlikely(existing)) {
-#ifndef CONFIG_USER_ONLY
-            {
-                int i;
-
-                for (i = 0; i < V_L2_SIZE; i++) {
-                    qemu_spin_destroy(&pd[i].lock);
-                }
-            }
-#endif
-            g_free(pd);
-            pd = existing;
-        }
-    }
-
-    return pd + (index & (V_L2_SIZE - 1));
-}
-
 /* In user-mode page locks aren't used; mmap_lock is enough */
 #ifdef CONFIG_USER_ONLY
 struct page_collection *
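The core of what this patch moves is page_find_alloc(): a multi-level table whose interior nodes are allocated lazily and published with a compare-and-swap, so lookups never take a lock and concurrent allocators race benignly. Below is a minimal, self-contained sketch of that pattern, with C11 atomics standing in for QEMU's qatomic_rcu_read()/qatomic_cmpxchg() and g_new0(); all names here (LEVEL_BITS, LEVELS, Leaf, publish, find_alloc) are illustrative, not QEMU's.

/*
 * Sketch of a lazily populated, lock-free radix table in the style of
 * page_find_alloc().  Assumes LEVELS interior levels below a static root,
 * each level indexed by LEVEL_BITS bits of the key.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define LEVEL_BITS 10                    /* cf. V_L2_BITS */
#define LEVEL_SIZE (1u << LEVEL_BITS)
#define LEVELS     2                     /* cf. v_l2_levels */

typedef struct Leaf {
    uintptr_t first_tb;                  /* cf. PageDesc */
} Leaf;

/* Level 1: statically allocated, cf. l1_map[]. */
static _Atomic(void *) root[LEVEL_SIZE];

/* Allocate a zeroed table and publish it; adopt the winner's on a race. */
static void *publish(_Atomic(void *) *slot, size_t elem_size)
{
    void *expected = NULL;
    void *p = calloc(LEVEL_SIZE, elem_size);

    if (!atomic_compare_exchange_strong(slot, &expected, p)) {
        free(p);                         /* lost the race; use theirs */
        p = expected;
    }
    return p;
}

static Leaf *find_alloc(uint64_t index, bool alloc)
{
    _Atomic(void *) *lp = &root[(index >> ((LEVELS + 1) * LEVEL_BITS))
                                & (LEVEL_SIZE - 1)];
    void *p;
    int i;

    /* Interior levels: readers never lock; writers race via cmpxchg. */
    for (i = LEVELS; i > 0; i--) {
        p = atomic_load_explicit(lp, memory_order_acquire);
        if (p == NULL) {
            if (!alloc) {
                return NULL;
            }
            p = publish(lp, sizeof(_Atomic(void *)));
        }
        lp = (_Atomic(void *) *)p
             + ((index >> (i * LEVEL_BITS)) & (LEVEL_SIZE - 1));
    }

    /* Bottom level: an array of Leaf descriptors. */
    p = atomic_load_explicit(lp, memory_order_acquire);
    if (p == NULL) {
        if (!alloc) {
            return NULL;
        }
        p = publish(lp, sizeof(Leaf));
    }
    return (Leaf *)p + (index & (LEVEL_SIZE - 1));
}

A thread that loses the publication race frees its freshly allocated table and continues with the winner's; that is also why the QEMU code above, in the losing path, destroys the spinlocks it has just initialized before freeing its PageDesc array.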
From patchwork Thu Oct 6 03:11:13 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 612863

From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: alex.bennee@linaro.org, laurent@vivier.eu, pbonzini@redhat.com, imp@bsdimp.com, f4bug@amsat.org
Subject: [PATCH 24/24] accel/tcg: Move remainder of page locking to tb-maint.c
Date: Wed, 5 Oct 2022 20:11:13 -0700
Message-Id: <20221006031113.1139454-25-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221006031113.1139454-1-richard.henderson@linaro.org>
References: <20221006031113.1139454-1-richard.henderson@linaro.org>
MIME-Version: 1.0

The only things that still touch PageDesc in translate-all.c are some
locking routines related to tb-maint.c which have not yet been moved.
Do so now.

Move some code up in tb-maint.c as well, to untangle the maze of
ifdefs, and allow a sensible final ordering.

Move some declarations from exec/translate-all.h to internal.h,
as they are only used within accel/tcg/.
Signed-off-by: Richard Henderson
Reviewed-by: Alex Bennée
---
 accel/tcg/internal.h         |  68 ++-----
 include/exec/translate-all.h |   6 -
 accel/tcg/tb-maint.c         | 352 +++++++++++++++++++++++++++++++++--
 accel/tcg/translate-all.c    | 301 ------------------------------
 4 files changed, 352 insertions(+), 375 deletions(-)

diff --git a/accel/tcg/internal.h b/accel/tcg/internal.h
index c6c9e02cfd..6ce1437c58 100644
--- a/accel/tcg/internal.h
+++ b/accel/tcg/internal.h
@@ -23,62 +23,28 @@
 #define assert_memory_lock() tcg_debug_assert(have_mmap_lock())
 #endif
 
-typedef struct PageDesc PageDesc;
-#ifndef CONFIG_USER_ONLY
-struct PageDesc {
-    QemuSpin lock;
-    /* list of TBs intersecting this ram page */
-    uintptr_t first_tb;
-};
-
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc);
-
-static inline PageDesc *page_find(tb_page_addr_t index)
-{
-    return page_find_alloc(index, false);
-}
-
-void page_table_config_init(void);
-#else
-static inline void page_table_config_init(void) { }
-#endif
-
-/* list iterators for lists of tagged pointers in TranslationBlock */
-#define TB_FOR_EACH_TAGGED(head, tb, n, field) \
-    for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \
-         tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
-         tb = (TranslationBlock *)((uintptr_t)tb & ~1))
-
-#define TB_FOR_EACH_JMP(head_tb, tb, n) \
-    TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
-
-/* In user-mode page locks aren't used; mmap_lock is enough */
-#ifdef CONFIG_USER_ONLY
-#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
-static inline void page_lock(PageDesc *pd) { }
-static inline void page_unlock(PageDesc *pd) { }
-#else
-#ifdef CONFIG_DEBUG_TCG
-void do_assert_page_locked(const PageDesc *pd, const char *file, int line);
-#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
-#else
-#define assert_page_locked(pd)
-#endif
-void page_lock(PageDesc *pd);
-void page_unlock(PageDesc *pd);
-
-/* TODO: For now, still shared with translate-all.c for system mode. */
-typedef int PageForEachNext;
-#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
-    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
-
-#endif
-#if !defined(CONFIG_USER_ONLY) && defined(CONFIG_DEBUG_TCG)
+#if defined(CONFIG_SOFTMMU) && defined(CONFIG_DEBUG_TCG)
 void assert_no_pages_locked(void);
 #else
 static inline void assert_no_pages_locked(void) { }
 #endif
 
+#ifdef CONFIG_USER_ONLY
+static inline void page_table_config_init(void) { }
+#else
+void page_table_config_init(void);
+#endif
+
+#ifdef CONFIG_SOFTMMU
+struct page_collection;
+void tb_invalidate_phys_page_fast(struct page_collection *pages,
+                                  tb_page_addr_t start, int len,
+                                  uintptr_t retaddr);
+struct page_collection *page_collection_lock(tb_page_addr_t start,
+                                             tb_page_addr_t end);
+void page_collection_unlock(struct page_collection *set);
+#endif /* CONFIG_SOFTMMU */
+
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc,
                               target_ulong cs_base, uint32_t flags,
                               int cflags);
diff --git a/include/exec/translate-all.h b/include/exec/translate-all.h
index 3e9cb91565..88602ae8d8 100644
--- a/include/exec/translate-all.h
+++ b/include/exec/translate-all.h
@@ -23,12 +23,6 @@
 
 /* translate-all.c */
 
-struct page_collection *page_collection_lock(tb_page_addr_t start,
-                                             tb_page_addr_t end);
-void page_collection_unlock(struct page_collection *set);
-void tb_invalidate_phys_page_fast(struct page_collection *pages,
-                                  tb_page_addr_t start, int len,
-                                  uintptr_t retaddr);
 void tb_invalidate_phys_page(tb_page_addr_t addr);
 void tb_check_watchpoint(CPUState *cpu, uintptr_t retaddr);
 
diff --git a/accel/tcg/tb-maint.c b/accel/tcg/tb-maint.c
index 31d0a74aa9..8fe2d322db 100644
--- a/accel/tcg/tb-maint.c
+++ b/accel/tcg/tb-maint.c
@@ -30,6 +30,15 @@
 
 #include "internal.h"
 
+/* List iterators for lists of tagged pointers in TranslationBlock. */
+#define TB_FOR_EACH_TAGGED(head, tb, n, field) \
+    for (n = (head) & 1, tb = (TranslationBlock *)((head) & ~1); \
+         tb; tb = (TranslationBlock *)tb->field[n], n = (uintptr_t)tb & 1, \
+         tb = (TranslationBlock *)((uintptr_t)tb & ~1))
+
+#define TB_FOR_EACH_JMP(head_tb, tb, n) \
+    TB_FOR_EACH_TAGGED((head_tb)->jmp_list_head, tb, n, jmp_list_next)
+
 static bool tb_cmp(const void *ap, const void *bp)
 {
     const TranslationBlock *a = ap;
@@ -51,7 +60,28 @@ void tb_htable_init(void)
     qht_init(&tb_ctx.htable, tb_cmp, CODE_GEN_HTABLE_SIZE, mode);
 }
 
+typedef struct PageDesc PageDesc;
+
 #ifdef CONFIG_USER_ONLY
+
+/*
+ * In user-mode page locks aren't used; mmap_lock is enough.
+ */
+#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
+
+static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
+                                  PageDesc **ret_p2, tb_page_addr_t phys2,
+                                  bool alloc)
+{
+    *ret_p1 = NULL;
+    *ret_p2 = NULL;
+}
+
+static inline void page_lock(PageDesc *pd) { }
+static inline void page_unlock(PageDesc *pd) { }
+static inline void page_lock_tb(const TranslationBlock *tb) { }
+static inline void page_unlock_tb(const TranslationBlock *tb) { }
+
 /*
  * For user-only, since we are protecting all of memory with a single lock,
  * and because the two pages of a TranslationBlock are always contiguous,
@@ -157,6 +187,12 @@ static int v_l2_levels;
 
 static void *l1_map[V_L1_MAX_SIZE];
 
+struct PageDesc {
+    QemuSpin lock;
+    /* list of TBs intersecting this ram page */
+    uintptr_t first_tb;
+};
+
 void page_table_config_init(void)
 {
     uint32_t v_l1_bits;
@@ -177,7 +213,7 @@ void page_table_config_init(void)
     assert(v_l2_levels >= 0);
 }
 
-PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
+static PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
 {
     PageDesc *pd;
     void **lp;
@@ -243,6 +279,303 @@ PageDesc *page_find_alloc(tb_page_addr_t index, bool alloc)
     return pd + (index & (V_L2_SIZE - 1));
 }
 
+static inline PageDesc *page_find(tb_page_addr_t index)
+{
+    return page_find_alloc(index, false);
+}
+
+/**
+ * struct page_entry - page descriptor entry
+ * @pd: pointer to the &struct PageDesc of the page this entry represents
+ * @index: page index of the page
+ * @locked: whether the page is locked
+ *
+ * This struct helps us keep track of the locked state of a page, without
+ * bloating &struct PageDesc.
+ *
+ * A page lock protects accesses to all fields of &struct PageDesc.
+ *
+ * See also: &struct page_collection.
+ */
+struct page_entry {
+    PageDesc *pd;
+    tb_page_addr_t index;
+    bool locked;
+};
+
+/**
+ * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
+ * @tree: Binary search tree (BST) of the pages, with key == page index
+ * @max: Pointer to the page in @tree with the highest page index
+ *
+ * To avoid deadlock we lock pages in ascending order of page index.
+ * When operating on a set of pages, we need to keep track of them so that
+ * we can lock them in order and also unlock them later. For this we collect
+ * pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
+ * @tree implementation we use does not provide an O(1) operation to obtain the
+ * highest-ranked element, we use @max to keep track of the inserted page
+ * with the highest index. This is valuable because if a page is not in
+ * the tree and its index is higher than @max's, then we can lock it
+ * without breaking the locking order rule.
+ *
+ * Note on naming: 'struct page_set' would be shorter, but we already have a few
+ * page_set_*() helpers, so page_collection is used instead to avoid confusion.
+ *
+ * See also: page_collection_lock().
+ */
+struct page_collection {
+    GTree *tree;
+    struct page_entry *max;
+};
+
+typedef int PageForEachNext;
+#define PAGE_FOR_EACH_TB(start, end, pagedesc, tb, n) \
+    TB_FOR_EACH_TAGGED((pagedesc)->first_tb, tb, n, page_next)
+
+#ifdef CONFIG_DEBUG_TCG
+
+static __thread GHashTable *ht_pages_locked_debug;
+
+static void ht_pages_locked_debug_init(void)
+{
+    if (ht_pages_locked_debug) {
+        return;
+    }
+    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
+}
+
+static bool page_is_locked(const PageDesc *pd)
+{
+    PageDesc *found;
+
+    ht_pages_locked_debug_init();
+    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
+    return !!found;
+}
+
+static void page_lock__debug(PageDesc *pd)
+{
+    ht_pages_locked_debug_init();
+    g_assert(!page_is_locked(pd));
+    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
+}
+
+static void page_unlock__debug(const PageDesc *pd)
+{
+    bool removed;
+
+    ht_pages_locked_debug_init();
+    g_assert(page_is_locked(pd));
+    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
+    g_assert(removed);
+}
+
+static void do_assert_page_locked(const PageDesc *pd,
+                                  const char *file, int line)
+{
+    if (unlikely(!page_is_locked(pd))) {
+        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
+                     pd, file, line);
+        abort();
+    }
+}
+#define assert_page_locked(pd) do_assert_page_locked(pd, __FILE__, __LINE__)
+
+void assert_no_pages_locked(void)
+{
+    ht_pages_locked_debug_init();
+    g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
+}
+
+#else /* !CONFIG_DEBUG_TCG */
+
+static inline void page_lock__debug(const PageDesc *pd) { }
+static inline void page_unlock__debug(const PageDesc *pd) { }
+static inline void assert_page_locked(const PageDesc *pd) { }
+
+#endif /* CONFIG_DEBUG_TCG */
+
+static void page_lock(PageDesc *pd)
+{
+    page_lock__debug(pd);
+    qemu_spin_lock(&pd->lock);
+}
+
+static void page_unlock(PageDesc *pd)
+{
+    qemu_spin_unlock(&pd->lock);
+    page_unlock__debug(pd);
+}
+
+static inline struct page_entry *
+page_entry_new(PageDesc *pd, tb_page_addr_t index)
+{
+    struct page_entry *pe = g_malloc(sizeof(*pe));
+
+    pe->index = index;
+    pe->pd = pd;
+    pe->locked = false;
+    return pe;
+}
+
+static void page_entry_destroy(gpointer p)
+{
+    struct page_entry *pe = p;
+
+    g_assert(pe->locked);
+    page_unlock(pe->pd);
+    g_free(pe);
+}
+
+/* returns false on success */
+static bool page_entry_trylock(struct page_entry *pe)
+{
+    bool busy;
+
+    busy = qemu_spin_trylock(&pe->pd->lock);
+    if (!busy) {
+        g_assert(!pe->locked);
+        pe->locked = true;
+        page_lock__debug(pe->pd);
+    }
+    return busy;
+}
+
+static void do_page_entry_lock(struct page_entry *pe)
+{
+    page_lock(pe->pd);
+    g_assert(!pe->locked);
+    pe->locked = true;
+}
+
+static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    do_page_entry_lock(pe);
+    return FALSE;
+}
+
+static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
+{
+    struct page_entry *pe = value;
+
+    if (pe->locked) {
+        pe->locked = false;
+        page_unlock(pe->pd);
+    }
+    return FALSE;
+}
+
+/*
+ * Trylock a page, and if successful, add the page to a collection.
+ * Returns true ("busy") if the page could not be locked; false otherwise.
+ */
+static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
+{
+    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
+    struct page_entry *pe;
+    PageDesc *pd;
+
+    pe = g_tree_lookup(set->tree, &index);
+    if (pe) {
+        return false;
+    }
+
+    pd = page_find(index);
+    if (pd == NULL) {
+        return false;
+    }
+
+    pe = page_entry_new(pd, index);
+    g_tree_insert(set->tree, &pe->index, pe);
+
+    /*
+     * If this is either (1) the first insertion or (2) a page whose index
+     * is higher than any other so far, just lock the page and move on.
+     */
+    if (set->max == NULL || pe->index > set->max->index) {
+        set->max = pe;
+        do_page_entry_lock(pe);
+        return false;
+    }
+    /*
+     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
+     * locks in order.
+     */
+    return page_entry_trylock(pe);
+}
+
+static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
+{
+    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
+    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
+
+    if (a == b) {
+        return 0;
+    } else if (a < b) {
+        return -1;
+    }
+    return 1;
+}
+
+/*
+ * Lock a range of pages ([@start,@end[) as well as the pages of all
+ * intersecting TBs.
+ * Locking order: acquire locks in ascending order of page index.
+ */
+struct page_collection *
+page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
+{
+    struct page_collection *set = g_malloc(sizeof(*set));
+    tb_page_addr_t index;
+    PageDesc *pd;
+
+    start >>= TARGET_PAGE_BITS;
+    end >>= TARGET_PAGE_BITS;
+    g_assert(start <= end);
+
+    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
+                                page_entry_destroy);
+    set->max = NULL;
+    assert_no_pages_locked();
+
+ retry:
+    g_tree_foreach(set->tree, page_entry_lock, NULL);
+
+    for (index = start; index <= end; index++) {
+        TranslationBlock *tb;
+        PageForEachNext n;
+
+        pd = page_find(index);
+        if (pd == NULL) {
+            continue;
+        }
+        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
+            g_tree_foreach(set->tree, page_entry_unlock, NULL);
+            goto retry;
+        }
+        assert_page_locked(pd);
+        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
+            if (page_trylock_add(set, tb_page_addr0(tb)) ||
+                (tb_page_addr1(tb) != -1 &&
+                 page_trylock_add(set, tb_page_addr1(tb)))) {
+                /* drop all locks, and reacquire in order */
+                g_tree_foreach(set->tree, page_entry_unlock, NULL);
+                goto retry;
+            }
+        }
+    }
+    return set;
+}
+
+void page_collection_unlock(struct page_collection *set)
+{
+    /* entries are unlocked and freed via page_entry_destroy */
+    g_tree_destroy(set->tree);
+    g_free(set);
+}
+
 /* Set to NULL all the 'first_tb' fields in all PageDescs. */
 static void page_flush_tb_1(int level, void **lp)
 {
@@ -338,7 +671,6 @@ static void tb_page_remove(TranslationBlock *tb)
         tb_page_remove1(tb, pd);
     }
 }
-
 #endif
 
 /* flush all the translation blocks */
@@ -536,15 +868,6 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
 }
 
 #ifdef CONFIG_USER_ONLY
-static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
-                                  PageDesc **ret_p2, tb_page_addr_t phys2,
-                                  bool alloc)
-{
-    *ret_p1 = NULL;
-    *ret_p2 = NULL;
-}
-static inline void page_lock_tb(const TranslationBlock *tb) { }
-static inline void page_unlock_tb(const TranslationBlock *tb) { }
 #else
 static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
                            PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
 {
@@ -756,8 +1079,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
 #else
 /*
  * @p must be non-NULL.
- * user-mode: call with mmap_lock held.
- * !user-mode: call with all @pages locked.
+ * Call with all @pages locked.
  */
 static void
 tb_invalidate_phys_page_range__locked(struct page_collection *pages,
@@ -828,8 +1150,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
 /*
  * Invalidate all TBs which intersect with the target physical
  * address page @addr.
- *
- * Called with mmap_lock held for user-mode emulation
  */
 void tb_invalidate_phys_page(tb_page_addr_t addr)
 {
@@ -855,8 +1175,6 @@ void tb_invalidate_phys_page(tb_page_addr_t addr)
  * 'is_cpu_write_access' should be true if called from a real cpu write
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
- *
- * Called with mmap_lock held for user-mode emulation.
  */
 void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
 {
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index 0537723b33..a5de92142b 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -63,52 +63,6 @@
 #include "tb-context.h"
 #include "internal.h"
 
-/* make various TB consistency checks */
-
-/**
- * struct page_entry - page descriptor entry
- * @pd: pointer to the &struct PageDesc of the page this entry represents
- * @index: page index of the page
- * @locked: whether the page is locked
- *
- * This struct helps us keep track of the locked state of a page, without
- * bloating &struct PageDesc.
- *
- * A page lock protects accesses to all fields of &struct PageDesc.
- *
- * See also: &struct page_collection.
- */
-struct page_entry {
-    PageDesc *pd;
-    tb_page_addr_t index;
-    bool locked;
-};
-
-/**
- * struct page_collection - tracks a set of pages (i.e. &struct page_entry's)
- * @tree: Binary search tree (BST) of the pages, with key == page index
- * @max: Pointer to the page in @tree with the highest page index
- *
- * To avoid deadlock we lock pages in ascending order of page index.
- * When operating on a set of pages, we need to keep track of them so that
- * we can lock them in order and also unlock them later. For this we collect
- * pages (i.e. &struct page_entry's) in a binary search @tree. Given that the
- * @tree implementation we use does not provide an O(1) operation to obtain the
- * highest-ranked element, we use @max to keep track of the inserted page
- * with the highest index. This is valuable because if a page is not in
- * the tree and its index is higher than @max's, then we can lock it
- * without breaking the locking order rule.
- *
- * Note on naming: 'struct page_set' would be shorter, but we already have a few
- * page_set_*() helpers, so page_collection is used instead to avoid confusion.
- *
- * See also: page_collection_lock().
- */
-struct page_collection {
-    GTree *tree;
-    struct page_entry *max;
-};
-
 /* Make sure all possible CPU event bits fit in tb->trace_vcpu_dstate */
 QEMU_BUILD_BUG_ON(CPU_TRACE_DSTATE_MAX_EVENTS >
                   sizeof_field(TranslationBlock, trace_vcpu_dstate)
@@ -352,261 +306,6 @@ void page_init(void)
 #endif
 }
 
-/* In user-mode page locks aren't used; mmap_lock is enough */
-#ifdef CONFIG_USER_ONLY
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    return NULL;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{ }
-#else /* !CONFIG_USER_ONLY */
-
-#ifdef CONFIG_DEBUG_TCG
-
-static __thread GHashTable *ht_pages_locked_debug;
-
-static void ht_pages_locked_debug_init(void)
-{
-    if (ht_pages_locked_debug) {
-        return;
-    }
-    ht_pages_locked_debug = g_hash_table_new(NULL, NULL);
-}
-
-static bool page_is_locked(const PageDesc *pd)
-{
-    PageDesc *found;
-
-    ht_pages_locked_debug_init();
-    found = g_hash_table_lookup(ht_pages_locked_debug, pd);
-    return !!found;
-}
-
-static void page_lock__debug(PageDesc *pd)
-{
-    ht_pages_locked_debug_init();
-    g_assert(!page_is_locked(pd));
-    g_hash_table_insert(ht_pages_locked_debug, pd, pd);
-}
-
-static void page_unlock__debug(const PageDesc *pd)
-{
-    bool removed;
-
-    ht_pages_locked_debug_init();
-    g_assert(page_is_locked(pd));
-    removed = g_hash_table_remove(ht_pages_locked_debug, pd);
-    g_assert(removed);
-}
-
-void do_assert_page_locked(const PageDesc *pd, const char *file, int line)
-{
-    if (unlikely(!page_is_locked(pd))) {
-        error_report("assert_page_lock: PageDesc %p not locked @ %s:%d",
-                     pd, file, line);
-        abort();
-    }
-}
-
-void assert_no_pages_locked(void)
-{
-    ht_pages_locked_debug_init();
-    g_assert(g_hash_table_size(ht_pages_locked_debug) == 0);
-}
-
-#else /* !CONFIG_DEBUG_TCG */
-
-static inline void page_lock__debug(const PageDesc *pd) { }
-static inline void page_unlock__debug(const PageDesc *pd) { }
-
-#endif /* CONFIG_DEBUG_TCG */
-
-void page_lock(PageDesc *pd)
-{
-    page_lock__debug(pd);
-    qemu_spin_lock(&pd->lock);
-}
-
-void page_unlock(PageDesc *pd)
-{
-    qemu_spin_unlock(&pd->lock);
-    page_unlock__debug(pd);
-}
-
-static inline struct page_entry *
-page_entry_new(PageDesc *pd, tb_page_addr_t index)
-{
-    struct page_entry *pe = g_malloc(sizeof(*pe));
-
-    pe->index = index;
-    pe->pd = pd;
-    pe->locked = false;
-    return pe;
-}
-
-static void page_entry_destroy(gpointer p)
-{
-    struct page_entry *pe = p;
-
-    g_assert(pe->locked);
-    page_unlock(pe->pd);
-    g_free(pe);
-}
-
-/* returns false on success */
-static bool page_entry_trylock(struct page_entry *pe)
-{
-    bool busy;
-
-    busy = qemu_spin_trylock(&pe->pd->lock);
-    if (!busy) {
-        g_assert(!pe->locked);
-        pe->locked = true;
-        page_lock__debug(pe->pd);
-    }
-    return busy;
-}
-
-static void do_page_entry_lock(struct page_entry *pe)
-{
-    page_lock(pe->pd);
-    g_assert(!pe->locked);
-    pe->locked = true;
-}
-
-static gboolean page_entry_lock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    do_page_entry_lock(pe);
-    return FALSE;
-}
-
-static gboolean page_entry_unlock(gpointer key, gpointer value, gpointer data)
-{
-    struct page_entry *pe = value;
-
-    if (pe->locked) {
-        pe->locked = false;
-        page_unlock(pe->pd);
-    }
-    return FALSE;
-}
-
-/*
- * Trylock a page, and if successful, add the page to a collection.
- * Returns true ("busy") if the page could not be locked; false otherwise.
- */
-static bool page_trylock_add(struct page_collection *set, tb_page_addr_t addr)
-{
-    tb_page_addr_t index = addr >> TARGET_PAGE_BITS;
-    struct page_entry *pe;
-    PageDesc *pd;
-
-    pe = g_tree_lookup(set->tree, &index);
-    if (pe) {
-        return false;
-    }
-
-    pd = page_find(index);
-    if (pd == NULL) {
-        return false;
-    }
-
-    pe = page_entry_new(pd, index);
-    g_tree_insert(set->tree, &pe->index, pe);
-
-    /*
-     * If this is either (1) the first insertion or (2) a page whose index
-     * is higher than any other so far, just lock the page and move on.
-     */
-    if (set->max == NULL || pe->index > set->max->index) {
-        set->max = pe;
-        do_page_entry_lock(pe);
-        return false;
-    }
-    /*
-     * Try to acquire out-of-order lock; if busy, return busy so that we acquire
-     * locks in order.
-     */
-    return page_entry_trylock(pe);
-}
-
-static gint tb_page_addr_cmp(gconstpointer ap, gconstpointer bp, gpointer udata)
-{
-    tb_page_addr_t a = *(const tb_page_addr_t *)ap;
-    tb_page_addr_t b = *(const tb_page_addr_t *)bp;
-
-    if (a == b) {
-        return 0;
-    } else if (a < b) {
-        return -1;
-    }
-    return 1;
-}
-
-/*
- * Lock a range of pages ([@start,@end[) as well as the pages of all
- * intersecting TBs.
- * Locking order: acquire locks in ascending order of page index.
- */
-struct page_collection *
-page_collection_lock(tb_page_addr_t start, tb_page_addr_t end)
-{
-    struct page_collection *set = g_malloc(sizeof(*set));
-    tb_page_addr_t index;
-    PageDesc *pd;
-
-    start >>= TARGET_PAGE_BITS;
-    end >>= TARGET_PAGE_BITS;
-    g_assert(start <= end);
-
-    set->tree = g_tree_new_full(tb_page_addr_cmp, NULL, NULL,
-                                page_entry_destroy);
-    set->max = NULL;
-    assert_no_pages_locked();
-
- retry:
-    g_tree_foreach(set->tree, page_entry_lock, NULL);
-
-    for (index = start; index <= end; index++) {
-        TranslationBlock *tb;
-        PageForEachNext n;
-
-        pd = page_find(index);
-        if (pd == NULL) {
-            continue;
-        }
-        if (page_trylock_add(set, index << TARGET_PAGE_BITS)) {
-            g_tree_foreach(set->tree, page_entry_unlock, NULL);
-            goto retry;
-        }
-        assert_page_locked(pd);
-        PAGE_FOR_EACH_TB(unused, unused, pd, tb, n) {
-            if (page_trylock_add(set, tb_page_addr0(tb)) ||
-                (tb_page_addr1(tb) != -1 &&
-                 page_trylock_add(set, tb_page_addr1(tb)))) {
-                /* drop all locks, and reacquire in order */
-                g_tree_foreach(set->tree, page_entry_unlock, NULL);
-                goto retry;
-            }
-        }
-    }
-    return set;
-}
-
-void page_collection_unlock(struct page_collection *set)
-{
-    /* entries are unlocked and freed via page_entry_destroy */
-    g_tree_destroy(set->tree);
-    g_free(set);
-}
-
-#endif /* !CONFIG_USER_ONLY */
-
 /* Called with mmap_lock held for user mode emulation. */
 TranslationBlock *tb_gen_code(CPUState *cpu, target_ulong pc, target_ulong cs_base,
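The page_collection code moved in this patch implements a deadlock-avoidance discipline worth spelling out: blocking lock acquisitions happen only in ascending page-index order (the @max field tracks the highest index locked so far); a page below that index is acquired with trylock, and if it is busy, every lock is dropped and the scan restarts. Below is a minimal sketch of the same discipline, using pthread mutexes and a flat array in place of QEMU's spinlocks and GTree; all names here (Page, LockSet, lockset_add, lock_range, NPAGES) are illustrative, not QEMU's.

/* Sketch of ordered locking with out-of-order trylock and full retry. */
#include <pthread.h>
#include <stdbool.h>
#include <string.h>

#define NPAGES 64

typedef struct Page {
    pthread_mutex_t lock;
    int span;        /* another page a "TB" here also touches, or -1 */
} Page;

static Page pages[NPAGES];

static void pages_init(void)
{
    for (int i = 0; i < NPAGES; i++) {
        pthread_mutex_init(&pages[i].lock, NULL);
        pages[i].span = -1;
    }
}

typedef struct LockSet {
    bool locked[NPAGES];
    int max;         /* highest index locked so far, -1 if none */
} LockSet;

/* Returns true ("busy") if idx could not be taken without risking
 * deadlock; mirrors page_trylock_add(). */
static bool lockset_add(LockSet *set, int idx)
{
    if (set->locked[idx]) {
        return false;                       /* already in the set */
    }
    if (idx > set->max) {
        /* Highest so far: in-order acquisition, safe to block. */
        pthread_mutex_lock(&pages[idx].lock);
        set->max = idx;
    } else if (pthread_mutex_trylock(&pages[idx].lock) != 0) {
        return true;                        /* out of order and busy */
    }
    set->locked[idx] = true;
    return false;
}

static void lockset_drop_all(LockSet *set)
{
    for (int i = 0; i < NPAGES; i++) {
        if (set->locked[i]) {
            pthread_mutex_unlock(&pages[i].lock);
            set->locked[i] = false;
        }
    }
    set->max = -1;
}

/* Lock [start, end] plus spanned pages; cf. page_collection_lock(). */
void lock_range(LockSet *set, int start, int end)
{
    memset(set, 0, sizeof(*set));
    set->max = -1;
retry:
    for (int i = start; i <= end; i++) {
        if (lockset_add(set, i) ||
            (pages[i].span >= 0 && lockset_add(set, pages[i].span))) {
            /* Would deadlock: drop everything, reacquire in order. */
            lockset_drop_all(set);
            goto retry;
        }
    }
}

No thread ever blocks waiting for a lock whose index is below one it already holds, so a cycle of waiters cannot form; the retry path simply reruns the scan once the contended page has been released.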