From patchwork Tue Sep 12 14:04:23 2023
X-Patchwork-Id: 721768
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH v2 01/12] target/arm: Don't skip MTE checks for LDRT/STRT at EL0
Date: Tue, 12 Sep 2023 15:04:23 +0100
Message-Id: <20230912140434.1333369-2-peter.maydell@linaro.org>
In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org>
References: <20230912140434.1333369-1-peter.maydell@linaro.org>

The LDRT/STRT "unprivileged load/store" instructions behave like
normal ones if executed at EL0. We handle this correctly for the
load/store semantics, but get the MTE checking wrong.

We always look at s->mte_active[is_unpriv] to see whether we should
be doing MTE checks, but in hflags.c when we set the TB flags that
will be used to fill the mte_active[] array we only set the
MTE0_ACTIVE bit if UNPRIV is true (i.e. we are not at EL0). This
means that a LDRT at EL0 will see s->mte_active[1] as 0, and will
not do MTE checks even when MTE is enabled.

To avoid the translate-time code having to do an explicit check on
s->unpriv to see if it is OK to index into the mte_active[] array,
duplicate MTE_ACTIVE into MTE0_ACTIVE when UNPRIV is false.

(This isn't a very serious bug because generally nobody executes
LDRT/STRT at EL0, because they have no use there.)

Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target/arm/tcg/hflags.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c
index 616c5fa7237..ea642384f5a 100644
--- a/target/arm/tcg/hflags.c
+++ b/target/arm/tcg/hflags.c
@@ -306,6 +306,15 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el,
                     && !(env->pstate & PSTATE_TCO)
                     && (sctlr & (el == 0 ? SCTLR_TCF0 : SCTLR_TCF))) {
                     DP_TBFLAG_A64(flags, MTE_ACTIVE, 1);
+                    if (!EX_TBFLAG_A64(flags, UNPRIV)) {
+                        /*
+                         * In non-unpriv contexts (eg EL0), unpriv load/stores
+                         * act like normal ones; duplicate the MTE info to
+                         * avoid translate-a64.c having to check UNPRIV to see
+                         * whether it is OK to index into MTE_ACTIVE[].
+                         */
+                        DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1);
+                    }
                 }
             }
             /* And again for unprivileged accesses, if required.  */
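
As a quick standalone illustration of the effect (a simplified model only;
the real logic lives in rebuild_hflags_a64() and the translator's
mte_active[] lookup, and the struct and helper below are invented for the
example):

    #include <stdbool.h>
    #include <stdio.h>

    /* Simplified model of the two TB flag bits and the translator's lookup. */
    struct tb_flags {
        bool mte_active;   /* MTE_ACTIVE: checks for normal accesses */
        bool mte0_active;  /* MTE0_ACTIVE: checks for "unprivileged" accesses */
    };

    static struct tb_flags rebuild_flags(bool mte_enabled, bool have_unpriv_regime)
    {
        struct tb_flags f = { .mte_active = mte_enabled };
        if (have_unpriv_regime) {
            /* EL1 with a distinct EL0 regime: MTE0_ACTIVE is computed
             * separately against the EL0 controls (elided here). */
        } else if (mte_enabled) {
            /* The fix: duplicate MTE_ACTIVE so mte_active[1] is also set. */
            f.mte0_active = true;
        }
        return f;
    }

    int main(void)
    {
        struct tb_flags f = rebuild_flags(true, false); /* e.g. running at EL0 */
        bool mte_active[2] = { f.mte_active, f.mte0_active };
        bool is_unpriv = true;                          /* an LDRT/STRT insn */
        /* Before the fix, mte_active[1] stayed 0 here and the check was skipped. */
        printf("do MTE check: %d\n", mte_active[is_unpriv]);
        return 0;
    }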

From patchwork Tue Sep 12 14:04:24 2023
X-Patchwork-Id: 721779
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH v2 02/12] target/arm: Implement FEAT_MOPS enable bits
Date: Tue, 12 Sep 2023 15:04:24 +0100
Message-Id: <20230912140434.1333369-3-peter.maydell@linaro.org>
In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org>
References: <20230912140434.1333369-1-peter.maydell@linaro.org>

FEAT_MOPS defines a handful of new enable bits:
 * HCRX_EL2.MSCEn, SCTLR_EL1.MSCEn, SCTLR_EL2.MSCEn: define whether
   the new insns should UNDEF or not
 * HCRX_EL2.MCE2: defines whether memops exceptions from EL1 should
   be taken to EL1 or EL2

Since we don't sanitise what bits can be written for the SCTLR
registers, we only need to handle the new bits in HCRX_EL2, and
define SCTLR_MSCEN for the new SCTLR bit value.

The precedence of "HCRX bits act as 0 if SCR_EL3.HXEn is 0" versus
"bit acts as 1 if EL2 disabled" is not clear from the register
definition text, but it is clear in the CheckMOPSEnabled()
pseudocode, so we follow that. We'll have to check whether other
bits we need to implement in future follow the same logic or not.

Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target/arm/cpu.h    |  6 ++++++
 target/arm/helper.c | 28 +++++++++++++++++++++-------
 2 files changed, 27 insertions(+), 7 deletions(-)

diff --git a/target/arm/cpu.h b/target/arm/cpu.h
index bc7a69a8753..266c1a9ea1b 100644
--- a/target/arm/cpu.h
+++ b/target/arm/cpu.h
@@ -1315,6 +1315,7 @@ void pmu_init(ARMCPU *cpu);
 #define SCTLR_EnIB    (1U << 30) /* v8.3, AArch64 only */
 #define SCTLR_EnIA    (1U << 31) /* v8.3, AArch64 only */
 #define SCTLR_DSSBS_32 (1U << 31) /* v8.5, AArch32 only */
+#define SCTLR_MSCEN   (1ULL << 33) /* FEAT_MOPS */
 #define SCTLR_BT0     (1ULL << 35) /* v8.5-BTI */
 #define SCTLR_BT1     (1ULL << 36) /* v8.5-BTI */
 #define SCTLR_ITFSB   (1ULL << 37) /* v8.5-MemTag */
@@ -4281,6 +4282,11 @@ static inline bool isar_feature_aa64_doublelock(const ARMISARegisters *id)
     return FIELD_SEX64(id->id_aa64dfr0, ID_AA64DFR0, DOUBLELOCK) >= 0;
 }
 
+static inline bool isar_feature_aa64_mops(const ARMISARegisters *id)
+{
+    return FIELD_EX64(id->id_aa64isar2, ID_AA64ISAR2, MOPS);
+}
+
 /*
  * Feature tests for "does this exist in either 32-bit or 64-bit?"
  */
diff --git a/target/arm/helper.c b/target/arm/helper.c
index 594985d7c8c..83620787b45 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -5980,7 +5980,10 @@ static void hcrx_write(CPUARMState *env, const ARMCPRegInfo *ri,
 {
     uint64_t valid_mask = 0;
 
-    /* No features adding bits to HCRX are implemented. */
+    /* FEAT_MOPS adds MSCEn and MCE2 */
+    if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+        valid_mask |= HCRX_MSCEN | HCRX_MCE2;
+    }
 
     /* Clear RES0 bits.  */
     env->cp15.hcrx_el2 = value & valid_mask;
@@ -6009,13 +6012,24 @@ uint64_t arm_hcrx_el2_eff(CPUARMState *env)
 {
     /*
      * The bits in this register behave as 0 for all purposes other than
-     * direct reads of the register if:
-     *   - EL2 is not enabled in the current security state,
-     *   - SCR_EL3.HXEn is 0.
+     * direct reads of the register if SCR_EL3.HXEn is 0.
+     * If EL2 is not enabled in the current security state, then the
+     * bit may behave as if 0, or as if 1, depending on the bit.
+     * For the moment, we treat the EL2-disabled case as taking
+     * priority over the HXEn-disabled case. This is true for the only
+     * bit for a feature which we implement where the answer is different
+     * for the two cases (MSCEn for FEAT_MOPS).
+     * This may need to be revisited for future bits.
      */
-    if (!arm_is_el2_enabled(env)
-        || (arm_feature(env, ARM_FEATURE_EL3)
-            && !(env->cp15.scr_el3 & SCR_HXEN))) {
+    if (!arm_is_el2_enabled(env)) {
+        uint64_t hcrx = 0;
+        if (cpu_isar_feature(aa64_mops, env_archcpu(env))) {
+            /* MSCEn behaves as 1 if EL2 is not enabled */
+            hcrx |= HCRX_MSCEN;
+        }
+        return hcrx;
+    }
+    if (arm_feature(env, ARM_FEATURE_EL3) && !(env->cp15.scr_el3 & SCR_HXEN)) {
         return 0;
     }
     return env->cp15.hcrx_el2;
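
The precedence rule described above can be sketched as a small standalone
model (illustrative only, not QEMU API; the HCRX_MSCEN bit position is an
assumption here, taken from the architecture rather than this excerpt):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define HCRX_MSCEN (1ULL << 11)   /* assumed bit position of HCRX_EL2.MSCEn */

    /* Simplified model of the effective-value logic in arm_hcrx_el2_eff(). */
    static uint64_t hcrx_el2_eff(bool el2_enabled, bool el3_present,
                                 bool scr_hxen, uint64_t hcrx_el2, bool have_mops)
    {
        if (!el2_enabled) {
            /* EL2-disabled case wins: MSCEn behaves as 1 so the MOPS insns
             * are usable at EL0/EL1 even though HCRX_EL2 is out of play. */
            return have_mops ? HCRX_MSCEN : 0;
        }
        if (el3_present && !scr_hxen) {
            return 0;          /* SCR_EL3.HXEn == 0: bits behave as 0 */
        }
        return hcrx_el2;       /* otherwise, the register value as written */
    }

    int main(void)
    {
        /* EL2 disabled: MSCEn reads as effectively 1. */
        printf("0x%llx\n", (unsigned long long)hcrx_el2_eff(false, true, false, 0, true));
        /* EL2 enabled but HXEn clear: everything reads as 0. */
        printf("0x%llx\n", (unsigned long long)hcrx_el2_eff(true, true, false, HCRX_MSCEN, true));
        return 0;
    }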

From patchwork Tue Sep 12 14:04:25 2023
X-Patchwork-Id: 721767
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH v2 03/12] target/arm: Pass unpriv bool to get_a64_user_mem_index()
Date: Tue, 12 Sep 2023 15:04:25 +0100
Message-Id: <20230912140434.1333369-4-peter.maydell@linaro.org>
In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org>
References: <20230912140434.1333369-1-peter.maydell@linaro.org>

In every place that we call the get_a64_user_mem_index() function
we do it like this:
 memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
Refactor so the caller passes in the bool that says whether they
want the 'unpriv' or 'normal' mem_index rather than having to do
the ?: themselves.

Signed-off-by: Peter Maydell
---
I'm about to add another use of this function which would otherwise
also end up doing this same ?: expression...

Reviewed-by: Richard Henderson
Reviewed-by: Philippe Mathieu-Daudé
---
 target/arm/tcg/translate-a64.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c
index 1dd86edae13..24afd929144 100644
--- a/target/arm/tcg/translate-a64.c
+++ b/target/arm/tcg/translate-a64.c
@@ -105,9 +105,17 @@ void a64_translate_init(void)
 }
 
 /*
- * Return the core mmu_idx to use for A64 "unprivileged load/store" insns
+ * Return the core mmu_idx to use for A64 load/store insns which
+ * have a "unprivileged load/store" variant. Those insns access
+ * EL0 if executed from an EL which has control over EL0 (usually
+ * EL1) but behave like normal loads and stores if executed from
+ * elsewhere (eg EL3).
+ *
+ * @unpriv : true for the unprivileged encoding; false for the
+ *           normal encoding (in which case we will return the same
+ *           thing as get_mem_index().
  */
-static int get_a64_user_mem_index(DisasContext *s)
+static int get_a64_user_mem_index(DisasContext *s, bool unpriv)
 {
     /*
      * If AccType_UNPRIV is not used, the insn uses AccType_NORMAL,
@@ -115,7 +123,7 @@ static int get_a64_user_mem_index(DisasContext *s)
      */
     ARMMMUIdx useridx = s->mmu_idx;
 
-    if (s->unpriv) {
+    if (unpriv && s->unpriv) {
         /*
          * We have pre-computed the condition for AccType_UNPRIV.
          * Therefore we should never get here with a mmu_idx for
@@ -3088,7 +3096,7 @@ static void op_addr_ldst_imm_pre(DisasContext *s, arg_ldst_imm *a,
     if (!a->p) {
         tcg_gen_addi_i64(*dirty_addr, *dirty_addr, offset);
     }
-    memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+    memidx = get_a64_user_mem_index(s, a->unpriv);
     *clean_addr = gen_mte_check1_mmuidx(s, *dirty_addr, is_store,
                                         a->w || a->rn != 31, mop, a->unpriv,
                                         memidx);
@@ -3109,7 +3117,7 @@ static bool trans_STR_i(DisasContext *s, arg_ldst_imm *a)
 {
     bool iss_sf, iss_valid = !a->w;
     TCGv_i64 clean_addr, dirty_addr, tcg_rt;
-    int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+    int memidx = get_a64_user_mem_index(s, a->unpriv);
     MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
 
     op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, true, mop);
@@ -3127,7 +3135,7 @@ static bool trans_LDR_i(DisasContext *s, arg_ldst_imm *a)
 {
     bool iss_sf, iss_valid = !a->w;
     TCGv_i64 clean_addr, dirty_addr, tcg_rt;
-    int memidx = a->unpriv ? get_a64_user_mem_index(s) : get_mem_index(s);
+    int memidx = get_a64_user_mem_index(s, a->unpriv);
     MemOp mop = finalize_memop(s, a->sz + a->sign * MO_SIGN);
 
     op_addr_ldst_imm_pre(s, a, &clean_addr, &dirty_addr, a->imm, false, mop);
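
The semantics of the new interface, reduced to a toy model (the struct and
index values below are invented; only the `unpriv && s->unpriv` shape comes
from the patch):

    #include <stdbool.h>
    #include <stdio.h>

    /* Toy DisasContext with just the fields that matter here. */
    struct ctx { int mmu_idx; int user_mmu_idx; bool unpriv; };

    static int get_mem_index(const struct ctx *s) { return s->mmu_idx; }

    /* unpriv = "this is the LDTR/STTR-style encoding"; it only selects the
     * user regime when the translator pre-computed s->unpriv (AccType_UNPRIV). */
    static int get_a64_user_mem_index(const struct ctx *s, bool unpriv)
    {
        return (unpriv && s->unpriv) ? s->user_mmu_idx : s->mmu_idx;
    }

    int main(void)
    {
        struct ctx s = { .mmu_idx = 2, .user_mmu_idx = 0, .unpriv = true };
        printf("%d %d\n",
               get_a64_user_mem_index(&s, false),  /* same as get_mem_index(&s) */
               get_a64_user_mem_index(&s, true));  /* the user regime */
        return 0;
    }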

From patchwork Tue Sep 12 14:04:26 2023
X-Patchwork-Id: 721775
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH v2 04/12] target/arm: Define syndrome function for MOPS exceptions
Date: Tue, 12 Sep 2023 15:04:26 +0100
Message-Id: <20230912140434.1333369-5-peter.maydell@linaro.org>
In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org>
References: <20230912140434.1333369-1-peter.maydell@linaro.org>

The FEAT_MOPS memory operations can raise a Memory Copy or Memory Set
exception if a copy or set instruction is executed when the CPU
register state is not correct for that instruction. Define the usual
syn_* function that constructs the syndrome register value for these
exceptions.

Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target/arm/syndrome.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/target/arm/syndrome.h b/target/arm/syndrome.h
index 8a6b8f8162a..5d34755508d 100644
--- a/target/arm/syndrome.h
+++ b/target/arm/syndrome.h
@@ -58,6 +58,7 @@ enum arm_exception_class {
     EC_DATAABORT              = 0x24,
     EC_DATAABORT_SAME_EL      = 0x25,
     EC_SPALIGNMENT            = 0x26,
+    EC_MOP                    = 0x27,
     EC_AA32_FPTRAP            = 0x28,
     EC_AA64_FPTRAP            = 0x2c,
     EC_SERROR                 = 0x2f,
@@ -334,4 +335,15 @@ static inline uint32_t syn_serror(uint32_t extra)
     return (EC_SERROR << ARM_EL_EC_SHIFT) | ARM_EL_IL | extra;
 }
 
+static inline uint32_t syn_mop(bool is_set, bool is_setg, int options,
+                               bool epilogue, bool wrong_option, bool option_a,
+                               int destreg, int srcreg, int sizereg)
+{
+    return (EC_MOP << ARM_EL_EC_SHIFT) | ARM_EL_IL |
+        (is_set << 24) | (is_setg << 23) | (options << 19) |
+        (epilogue << 18) | (wrong_option << 17) | (option_a << 16) |
+        (destreg << 10) | (srcreg << 5) | sizereg;
+}
+
+
 #endif /* TARGET_ARM_SYNDROME_H */
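
A worked example of the field packing, using the formula from the patch.
The ESR layout constants (EC in bits [31:26], IL at bit 25) are assumptions
taken from the usual AArch64 ESR_ELx format rather than this excerpt, and
EC_MOP is made unsigned here to keep the shift well-defined in a standalone
program:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ARM_EL_EC_SHIFT 26        /* assumed: EC field lives in [31:26] */
    #define ARM_EL_IL       (1u << 25)/* assumed: IL is bit 25 */
    #define EC_MOP          0x27u     /* 0x27 per the patch */

    /* Body copied from the patch's syn_mop(). */
    static uint32_t syn_mop(bool is_set, bool is_setg, int options,
                            bool epilogue, bool wrong_option, bool option_a,
                            int destreg, int srcreg, int sizereg)
    {
        return (EC_MOP << ARM_EL_EC_SHIFT) | ARM_EL_IL |
            (is_set << 24) | (is_setg << 23) | (options << 19) |
            (epilogue << 18) | (wrong_option << 17) | (option_a << 16) |
            (destreg << 10) | (srcreg << 5) | sizereg;
    }

    int main(void)
    {
        /* e.g. a copy (not set) operation reporting destreg=0, srcreg=1,
         * sizereg=2 with the "wrong option" flag set. */
        uint32_t syn = syn_mop(false, false, 0, false, true, false, 0, 1, 2);
        printf("syndrome = 0x%08x\n", (unsigned)syn);
        return 0;
    }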

From patchwork Tue Sep 12 14:04:27 2023
X-Patchwork-Id: 721772
From: Peter Maydell
To: qemu-arm@nongnu.org, qemu-devel@nongnu.org
Subject: [PATCH v2 05/12] target/arm: New function allocation_tag_mem_probe()
Date: Tue, 12 Sep 2023 15:04:27 +0100
Message-Id: <20230912140434.1333369-6-peter.maydell@linaro.org>
In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org>
References: <20230912140434.1333369-1-peter.maydell@linaro.org>

For the FEAT_MOPS operations, the existing allocation_tag_mem()
function almost does what we want, but it will take a watchpoint
exception even for an ra == 0 probe request, and it requires that the
caller guarantee that the memory is accessible. For FEAT_MOPS we want
a function that will not take any kind of exception, and will return
NULL for the not-accessible case.

Rename allocation_tag_mem() to allocation_tag_mem_probe() and add an
extra 'probe' argument that lets us distinguish these cases;
allocation_tag_mem() is now a wrapper that always passes 'false'.

Signed-off-by: Peter Maydell
Reviewed-by: Richard Henderson
---
 target/arm/tcg/mte_helper.c | 48 ++++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 11 deletions(-)

diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c
index e2494f73cf3..303bcc7fd84 100644
--- a/target/arm/tcg/mte_helper.c
+++ b/target/arm/tcg/mte_helper.c
@@ -50,13 +50,14 @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
 }
 
 /**
- * allocation_tag_mem:
+ * allocation_tag_mem_probe:
  * @env: the cpu environment
 * @ptr_mmu_idx: the addressing regime to use for the virtual address
 * @ptr: the virtual address for which to look up tag memory
 * @ptr_access: the access to use for the virtual address
 * @ptr_size: the number of bytes in the normal memory access
 * @tag_access: the access to use for the tag memory
+ * @probe: true to merely probe, never taking an exception
 * @ra: the return address for exception handling
 *
 * Our tag memory is formatted as a sequence of little-endian nibbles.
@@ -65,15 +66,25 @@ static int choose_nonexcluded_tag(int tag, int offset, uint16_t exclude)
 * for the higher addr.
 *
 * Here, resolve the physical address from the virtual address, and return
- * a pointer to the corresponding tag byte.  Exit with exception if the
- * virtual address is not accessible for @ptr_access.
+ * a pointer to the corresponding tag byte.
 *
 * If there is no tag storage corresponding to @ptr, return NULL.
+ *
+ * If the page is inaccessible for @ptr_access, or has a watchpoint, there are
+ * three options:
+ * (1) probe = true, ra = 0 : pure probe -- we return NULL if the page is not
+ *     accessible, and do not take watchpoint traps. The calling code must
+ *     handle those cases in the right priority compared to MTE traps.
+ * (2) probe = false, ra = 0 : probe, no fault expected -- the caller guarantees
+ *     that the page is going to be accessible. We will take watchpoint traps.
+ * (3) probe = false, ra != 0 : non-probe -- we will take both memory access
+ *     traps and watchpoint traps.
+ * (probe = true, ra != 0 is invalid and will assert.)
 */
-static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
-                                   uint64_t ptr, MMUAccessType ptr_access,
-                                   int ptr_size, MMUAccessType tag_access,
-                                   uintptr_t ra)
+static uint8_t *allocation_tag_mem_probe(CPUARMState *env, int ptr_mmu_idx,
+                                         uint64_t ptr, MMUAccessType ptr_access,
+                                         int ptr_size, MMUAccessType tag_access,
+                                         bool probe, uintptr_t ra)
 {
 #ifdef CONFIG_USER_ONLY
     uint64_t clean_ptr = useronly_clean_ptr(ptr);
@@ -81,6 +92,8 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     uint8_t *tags;
     uintptr_t index;
 
+    assert(!(probe && ra));
+
     if (!(flags & (ptr_access == MMU_DATA_STORE ? PAGE_WRITE_ORG : PAGE_READ))) {
         cpu_loop_exit_sigsegv(env_cpu(env), ptr, ptr_access,
                               !(flags & PAGE_VALID), ra);
@@ -111,12 +124,16 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
     * exception for inaccessible pages, and resolves the virtual address
     * into the softmmu tlb.
     *
-     * When RA == 0, this is for mte_probe.  The page is expected to be
-     * valid.  Indicate to probe_access_flags no-fault, then assert that
-     * we received a valid page.
+     * When RA == 0, this is either a pure probe or a no-fault-expected probe.
+     * Indicate to probe_access_flags no-fault, then either return NULL
+     * for the pure probe, or assert that we received a valid page for the
+     * no-fault-expected probe.
     */
    flags = probe_access_full(env, ptr, 0, ptr_access, ptr_mmu_idx,
                              ra == 0, &host, &full, ra);
+    if (probe && (flags & TLB_INVALID_MASK)) {
+        return NULL;
+    }
    assert(!(flags & TLB_INVALID_MASK));
 
    /* If the virtual page MemAttr != Tagged, access unchecked. */
@@ -157,7 +174,7 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
    }
 
    /* Any debug exception has priority over a tag check exception. */
-    if (unlikely(flags & TLB_WATCHPOINT)) {
+    if (!probe && unlikely(flags & TLB_WATCHPOINT)) {
        int wp = ptr_access == MMU_DATA_LOAD ? BP_MEM_READ : BP_MEM_WRITE;
        assert(ra != 0);
        cpu_check_watchpoint(env_cpu(env), ptr, ptr_size, attrs, wp, ra);
@@ -199,6 +216,15 @@ static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
 #endif
 }
 
+static uint8_t *allocation_tag_mem(CPUARMState *env, int ptr_mmu_idx,
+                                   uint64_t ptr, MMUAccessType ptr_access,
+                                   int ptr_size, MMUAccessType tag_access,
+                                   uintptr_t ra)
+{
+    return allocation_tag_mem_probe(env, ptr_mmu_idx, ptr, ptr_access,
+                                    ptr_size, tag_access, false, ra);
+}
+
 uint64_t HELPER(irg)(CPUARMState *env, uint64_t rn, uint64_t rm)
 {
     uint16_t exclude = extract32(rm | env->cp15.gcr_el1, 0, 16);
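
The three calling modes described in the new comment can be summarised as a
tiny standalone model (nothing here is QEMU API; the strings simply restate
the comment's cases):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy model of the probe/ra combinations accepted by the new function. */
    static const char *probe_mode(bool probe, uintptr_t ra)
    {
        assert(!(probe && ra));       /* probe = true, ra != 0 is invalid */
        if (probe) {
            return "pure probe: return NULL on a bad page, never trap";
        } else if (ra == 0) {
            return "no-fault-expected: caller guarantees the page, watchpoints may trap";
        } else {
            return "normal: may take both memory access traps and watchpoint traps";
        }
    }

    int main(void)
    {
        printf("%s\n", probe_mode(true, 0));
        printf("%s\n", probe_mode(false, 0));
        printf("%s\n", probe_mode(false, 0x1234));
        return 0;
    }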
[209.51.188.17]) by mx.google.com with ESMTPS id l12-20020a05620a210c00b0076af512f760si6335402qkl.132.2023.09.12.07.06.38 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Sep 2023 07:06:39 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=COOtc1xr; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qg415-000071-W2; Tue, 12 Sep 2023 10:05:24 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qg40j-00089U-46 for qemu-devel@nongnu.org; Tue, 12 Sep 2023 10:05:01 -0400 Received: from mail-wr1-x430.google.com ([2a00:1450:4864:20::430]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1qg40T-0003qN-Hn for qemu-devel@nongnu.org; Tue, 12 Sep 2023 10:04:53 -0400 Received: by mail-wr1-x430.google.com with SMTP id ffacd0b85a97d-31adc5c899fso6021433f8f.2 for ; Tue, 12 Sep 2023 07:04:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1694527481; x=1695132281; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=xWeyt5EAvAefsdum7L+OXTH/QqXTlR2KtCZ49pT3dfQ=; b=COOtc1xrzDh9kb9L2kACv89d0fQXeWU3OL0jYvA8B8v9ikB2oNTX7pGkSiBSHjSiKZ pjDup06YdEuzqG48nxpfREGzaEcKBYzRoM6Moatr/FDxiVOdoAZSosjRADCqABCbSzUH Z/33u5vYgurYDPBuv5vRoKTHIDlCUTLC+IRGD2IL54qZz4+8D5VZxqkl2yBFu11Y+HbG xWrPycE45y++TLTMWob73AtfwlPQCzfxokFcQK6nSA3Cd5kB+YK7UWnIiyJUBBUbUNlP rkcuPsvZ53++ieC8l/SG/OPtWPiEXBHeGYgV7BdZfHcrHYZJgC+nIFPpQDF7vkPSzp5c NJGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1694527481; x=1695132281; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xWeyt5EAvAefsdum7L+OXTH/QqXTlR2KtCZ49pT3dfQ=; b=dX6CJN7kkSpA+8SHu4xCu2X7aPi65ie0CjMf7i9qCKcmsU/GaZkj4JbgLJ7YbJatuO 0lRUJmzbbkxNF6tKccr2cEq/HfcRwhISycNMwdr4gdGm2yGAZIMVxw15UYXSm84LD126 GEEg6MNXu4aoFiDatCZMPXSb7aVEG/py47tgjHMVgz0eUxFaTRpw4A3OWzmGKadwaAOW 4ySqEYgXWsSszKZivQratHvIAN4ejPwhkHL95jSuheGUw+0JitkphhrARYOZEBojAZxi GZDm4HF3IbwkJIx7AbBRA+lkMKv+vvcbrfEUsg3GMbMbfPa9NzvxE0Jmre9HMeuSIdPN JRaA== X-Gm-Message-State: AOJu0YwW4+n4Mvou7bXrV9+8u2A2rpy8BPD428QweFTaLat1O0MhPS5z f8KtJ4aGKNq+niugECLuCBVqzCwnTYpGVWVku8k= X-Received: by 2002:adf:ea85:0:b0:314:3b1f:8ea2 with SMTP id s5-20020adfea85000000b003143b1f8ea2mr10796405wrm.6.1694527480996; Tue, 12 Sep 2023 07:04:40 -0700 (PDT) Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by smtp.gmail.com with ESMTPSA id r3-20020a5d4983000000b00317ab75748bsm12892672wrq.49.2023.09.12.07.04.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Sep 2023 07:04:40 -0700 (PDT) From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 06/12] target/arm: Implement MTE tag-checking functions for FEAT_MOPS Date: Tue, 12 Sep 2023 15:04:28 +0100 Message-Id: <20230912140434.1333369-7-peter.maydell@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::430; envelope-from=peter.maydell@linaro.org; helo=mail-wr1-x430.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org The FEAT_MOPS instructions need a couple of helper routines that check for MTE tag failures: * mte_mops_probe() checks whether there is going to be a tag error in the next up-to-a-page worth of data * mte_check_fail() is an existing function to record the fact of a tag failure, which we need to make global so we can call it from helper-a64.c Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/internals.h | 28 +++++++++++++++++++ target/arm/tcg/mte_helper.c | 54 +++++++++++++++++++++++++++++++++++-- 2 files changed, 80 insertions(+), 2 deletions(-) diff --git a/target/arm/internals.h b/target/arm/internals.h index 5f5393b25c4..a70a7fd50f6 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1272,6 +1272,34 @@ FIELD(MTEDESC, SIZEM1, 12, SIMD_DATA_BITS - 12) /* size - 1 */ bool mte_probe(CPUARMState *env, uint32_t desc, uint64_t ptr); uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra); +/** + * mte_mops_probe: Check where the next MTE failure is for a FEAT_MOPS operation + * @env: CPU env + * @ptr: start address of memory region (dirty pointer) + * @size: length of region (guaranteed not to cross a page boundary) + * @desc: MTEDESC descriptor word (0 means no MTE checks) + * Returns: the size of the region that can be copied without hitting + * an MTE tag failure + * + * Note that we assume that the caller has already checked the TBI + * and TCMA bits with mte_checks_needed() and an MTE check is definitely + * required. + */ +uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t desc); + +/** + * mte_check_fail: Record an MTE tag check failure + * @env: CPU env + * @desc: MTEDESC descriptor word + * @dirty_ptr: Failing dirty address + * @ra: TCG retaddr + * + * This may never return (if the MTE tag checks are configured to fault). 
+ */ +void mte_check_fail(CPUARMState *env, uint32_t desc, + uint64_t dirty_ptr, uintptr_t ra); + static inline int allocation_tag_from_addr(uint64_t ptr) { return extract64(ptr, 56, 4); diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c index 303bcc7fd84..1cb61cea7af 100644 --- a/target/arm/tcg/mte_helper.c +++ b/target/arm/tcg/mte_helper.c @@ -617,8 +617,8 @@ static void mte_async_check_fail(CPUARMState *env, uint64_t dirty_ptr, } /* Record a tag check failure. */ -static void mte_check_fail(CPUARMState *env, uint32_t desc, - uint64_t dirty_ptr, uintptr_t ra) +void mte_check_fail(CPUARMState *env, uint32_t desc, + uint64_t dirty_ptr, uintptr_t ra) { int mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX); ARMMMUIdx arm_mmu_idx = core_to_aa64_mmu_idx(mmu_idx); @@ -991,3 +991,53 @@ uint64_t HELPER(mte_check_zva)(CPUARMState *env, uint32_t desc, uint64_t ptr) done: return useronly_clean_ptr(ptr); } + +uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t desc) +{ + int mmu_idx, tag_count; + uint64_t ptr_tag, tag_first, tag_last; + void *mem; + bool w = FIELD_EX32(desc, MTEDESC, WRITE); + uint32_t n; + + mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX); + /* True probe; this will never fault */ + mem = allocation_tag_mem_probe(env, mmu_idx, ptr, + w ? MMU_DATA_STORE : MMU_DATA_LOAD, + size, MMU_DATA_LOAD, true, 0); + if (!mem) { + return size; + } + + /* + * TODO: checkN() is not designed for checks of the size we expect + * for FEAT_MOPS operations, so we should implement this differently. + * Maybe we should do something like + * if (region start and size are aligned nicely) { + * do direct loads of 64 tag bits at a time; + * } else { + * call checkN() + * } + */ + /* Round the bounds to the tag granule, and compute the number of tags. */ + ptr_tag = allocation_tag_from_addr(ptr); + tag_first = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE); + tag_last = QEMU_ALIGN_DOWN(ptr + size - 1, TAG_GRANULE); + tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1; + n = checkN(mem, ptr & TAG_GRANULE, ptr_tag, tag_count); + if (likely(n == tag_count)) { + return size; + } + + /* + * Failure; for the first granule, it's at @ptr. Otherwise + * it's at the first byte of the nth granule. Calculate how + * many bytes we can access without hitting that failure. 
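+ * For example (illustrative values only): with TAG_GRANULE == 16,
+ * ptr == 0x1008 and the mismatch in the third granule (so n == 2),
+ * tag_first is 0x1000 and we return 2 * 16 - (0x1008 - 0x1000) == 24:
+ * the guest can access 24 bytes before reaching the failing granule
+ * at 0x1020.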
+ */ + if (n == 0) { + return 0; + } else { + return n * TAG_GRANULE - (ptr - tag_first); + } +} From patchwork Tue Sep 12 14:04:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 721777 Delivered-To: patch@linaro.org Received: by 2002:adf:f64d:0:b0:31d:da82:a3b4 with SMTP id x13csp1666236wrp; Tue, 12 Sep 2023 07:08:36 -0700 (PDT) X-Google-Smtp-Source: AGHT+IEvF91JBt12xgyg/Z7zNyFjwvCRp0bedsxrC8Kia47HguJFeIGewk9bVuRtFdTbR5c6uU8f X-Received: by 2002:a05:6358:925:b0:13c:eef8:dec0 with SMTP id r37-20020a056358092500b0013ceef8dec0mr14582811rwi.3.1694527716638; Tue, 12 Sep 2023 07:08:36 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1694527716; cv=none; d=google.com; s=arc-20160816; b=Py7Zdcf/hKNBTvgKFElT3RgP6CJYLxVxrbVYARic9ykyBjkwEZ2gRk66hvyLkqpo6C w+KTxy/A25XynN6ugUE/htg/fbkw1Y/1kgcUuDD619lTkKqMnEau4irgYTDAVtHZnncE j7kiVpAJa091kJlM92/qGPuxLZTk/TS9PXNE0F/PcCjBEf6sGA4Bz5ww5Ksu56oBY2GO F/k9nymQVtp0fObTOZr42mUolAGynF6YVGu8S801uRz01LXPAtGRes53hrrlhAQ83qvz gJroZw0LmvGziZo8l1+NkmbbOBBvD/SMCZPCx0roNy68POis2aJw6n61QuqpS0gKFKgI sy4g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=wlUwz3Z//tFBD6vOX5Y7annsFJ4rrWopqGexNOG+/ec=; fh=H2AmuqulvQE+T5zu97MCEUC3z9wF9NssS7895NhR/+c=; b=kpcgUwx4zg9cJp1U24NDJlVaUFKITJ43L+xAQIqwxwLoIHW4TAFcf3jRibUfYBPHgR 0bv2K+G7kLi4TaHaYYmqJxYfxl/QQKCxF9Sr/VjhRS1pwNairCnTrN+TyMkwfapWpYpa 2F+08Y9PPkELOzC2avqk3BwnCTKBuUjxOUYyE5G++CGoFp+/2fZOV6wr23eQ7vFnQ1j3 K8ZGo7aQfunr7Kgt7KNUsFYwxoiI5CCL5gt9g4vmq5txFTg1ogloOLUhE33qHwbnmlzg 4GmbldbERpyyrn7Z5aYOpqhRZbG03qhEgol1BAbDMqtsZ0N1YblGI7LwTd2YA+414ck2 2GAQ== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=hJQIfjx7; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[209.51.188.17]) by mx.google.com with ESMTPS id j11-20020a0ce00b000000b0064c1d09b330si6049124qvk.65.2023.09.12.07.08.36 for (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256); Tue, 12 Sep 2023 07:08:36 -0700 (PDT) Received-SPF: pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) client-ip=209.51.188.17; Authentication-Results: mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=hJQIfjx7; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1qg411-0008PT-FQ; Tue, 12 Sep 2023 10:05:19 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1qg40x-0008Jp-MD for qemu-devel@nongnu.org; Tue, 12 Sep 2023 10:05:15 -0400 Received: from mail-wr1-x431.google.com ([2a00:1450:4864:20::431]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1qg40p-0003ql-Ff for qemu-devel@nongnu.org; Tue, 12 Sep 2023 10:05:12 -0400 Received: by mail-wr1-x431.google.com with SMTP id ffacd0b85a97d-31c3726cc45so5887432f8f.0 for ; Tue, 12 Sep 2023 07:04:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; t=1694527483; x=1695132283; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:from:to:cc:subject:date:message-id :reply-to; bh=wlUwz3Z//tFBD6vOX5Y7annsFJ4rrWopqGexNOG+/ec=; b=hJQIfjx7keQZgF1EAS5Le/yIPuah6CEDjQiFZ1ymg58Pajvo+d49RkhLt4GvRuFh4q wgQuDRYiW+qdur6nWvDBpjOvfKL5624+/+5lcG+wK7Ts5hSQTFigJkJN1bGk+X35K91S KVAbDd6zUp7Jpk9tw3Ll1uKb9gqZK016LeQFiIotSsukcjpOVZiMSmQU0aiNRIhnWZVU 7m+IhRgnHsujuNMTVcgaJ3YN6d0f0kGzbCL4A5S4wkVdR55vOTF6cBWgcwXkM7Drnaqf gP6Y/fuPqJw8qfWtMSfDQ5AC1MpxfeSDk7ZUZXUW7CzW9nv3Xr9Z1rFm+e79uhU2Nnhj vQgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1694527483; x=1695132283; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=wlUwz3Z//tFBD6vOX5Y7annsFJ4rrWopqGexNOG+/ec=; b=g3Z/v628OxgPxmHhwScWBxhRYouViTrXKd9KEflUUdpFpcBdQiiwpum8TZIoIFteWH ojAEw/ksf3f6th7Tz7VBeyxK0gMHx8Xn2tnMJT9LR8u6248aITuEqylfsac9u3cRRjJr crRPUxVDBjkmGOXHmymAkdpR6tfXi6dOXji4GsOPhiNMZFiTHK92+D1tt4e/tD+RUL07 SSIaLM+t2yMX2gLRwr9zzGnsNg7KuXt+0qdp/y3yi6evehRJUKKTFq8RTaGkj1YdvHA4 SjKq0Cy6jCbzFYBuJDHJ9os7bOy9SKFoRConY/EwqBYG1IJhnIESrAei8w35MOEBRjPB SnOg== X-Gm-Message-State: AOJu0Yw2tPtYaqfOzXce6reRZz1tfwPioC5p9hH8g/KI1PTAo5kAiazQ K84x08ohArB5G1i8v6VdgmH1kQ== X-Received: by 2002:a05:6000:a17:b0:31f:9398:3656 with SMTP id co23-20020a0560000a1700b0031f93983656mr7985602wrb.34.1694527481509; Tue, 12 Sep 2023 07:04:41 -0700 (PDT) Received: from orth.archaic.org.uk (orth.archaic.org.uk. 
[2001:8b0:1d0::2]) by smtp.gmail.com with ESMTPSA id r3-20020a5d4983000000b00317ab75748bsm12892672wrq.49.2023.09.12.07.04.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Sep 2023 07:04:41 -0700 (PDT) From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 07/12] target/arm: Implement the SET* instructions Date: Tue, 12 Sep 2023 15:04:29 +0100 Message-Id: <20230912140434.1333369-8-peter.maydell@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::431; envelope-from=peter.maydell@linaro.org; helo=mail-wr1-x431.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Implement the SET* instructions which collectively implement a "memset" operation. These come in a set of three, eg SETP (prologue), SETM (main), SETE (epilogue), and each of those has different flavours to indicate whether memory accesses should be unpriv or non-temporal. This commit does not include the "memset with tag setting" SETG* instructions. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- v2: separate do_setp/setm/sete, so we can have separate helpers for SETG that pass in a stepfn and bool is_setg, rather than one helper that looks in syndrome to decide whether it's set or setg --- target/arm/tcg/helper-a64.h | 4 + target/arm/tcg/a64.decode | 16 ++ target/arm/tcg/helper-a64.c | 344 +++++++++++++++++++++++++++++++++ target/arm/tcg/translate-a64.c | 49 +++++ 4 files changed, 413 insertions(+) diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h index 57cfd68569e..7ce5d2105ad 100644 --- a/target/arm/tcg/helper-a64.h +++ b/target/arm/tcg/helper-a64.h @@ -117,3 +117,7 @@ DEF_HELPER_FLAGS_3(stzgm_tags, TCG_CALL_NO_WG, void, env, i64, i64) DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG, noreturn, env, i64, i32, i32) + +DEF_HELPER_3(setp, void, env, i32, i32) +DEF_HELPER_3(setm, void, env, i32, i32) +DEF_HELPER_3(sete, void, env, i32, i32) diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode index 71113173020..c2a97328eeb 100644 --- a/target/arm/tcg/a64.decode +++ b/target/arm/tcg/a64.decode @@ -554,3 +554,19 @@ LDGM 11011001 11 1 ......... 00 ..... ..... @ldst_tag_mult p=0 w=0 STZ2G 11011001 11 1 ......... 01 ..... ..... @ldst_tag p=1 w=1 STZ2G 11011001 11 1 ......... 10 ..... ..... @ldst_tag p=0 w=0 STZ2G 11011001 11 1 ......... 11 ..... ..... @ldst_tag p=0 w=1 + +# Memory operations (memset, memcpy, memmove) +# Each of these comes in a set of three, eg SETP (prologue), SETM (main), +# SETE (epilogue), and each of those has different flavours to +# indicate whether memory accesses should be unpriv or non-temporal. +# We don't distinguish temporal and non-temporal accesses, but we +# do need to report it in syndrome register values. + +# Memset +&set rs rn rd unpriv nontemp +# op2 bit 1 is nontemporal bit +@set .. 
......... rs:5 .. nontemp:1 unpriv:1 .. rn:5 rd:5 &set + +SETP 00 011001110 ..... 00 . . 01 ..... ..... @set +SETM 00 011001110 ..... 01 . . 01 ..... ..... @set +SETE 00 011001110 ..... 10 . . 01 ..... ..... @set diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c index 0cf56f6dc44..24ae5ecf32e 100644 --- a/target/arm/tcg/helper-a64.c +++ b/target/arm/tcg/helper-a64.c @@ -968,3 +968,347 @@ void HELPER(unaligned_access)(CPUARMState *env, uint64_t addr, arm_cpu_do_unaligned_access(env_cpu(env), addr, access_type, mmu_idx, GETPC()); } + +/* Memory operations (memset, memmove, memcpy) */ + +/* + * Return true if the CPY* and SET* insns can execute; compare + * pseudocode CheckMOPSEnabled(), though we refactor it a little. + */ +static bool mops_enabled(CPUARMState *env) +{ + int el = arm_current_el(env); + + if (el < 2 && + (arm_hcr_el2_eff(env) & (HCR_E2H | HCR_TGE)) != (HCR_E2H | HCR_TGE) && + !(arm_hcrx_el2_eff(env) & HCRX_MSCEN)) { + return false; + } + + if (el == 0) { + if (!el_is_in_host(env, 0)) { + return env->cp15.sctlr_el[1] & SCTLR_MSCEN; + } else { + return env->cp15.sctlr_el[2] & SCTLR_MSCEN; + } + } + return true; +} + +static void check_mops_enabled(CPUARMState *env, uintptr_t ra) +{ + if (!mops_enabled(env)) { + raise_exception_ra(env, EXCP_UDEF, syn_uncategorized(), + exception_target_el(env), ra); + } +} + +/* + * Return the target exception level for an exception due + * to mismatched arguments in a FEAT_MOPS copy or set. + * Compare pseudocode MismatchedCpySetTargetEL() + */ +static int mops_mismatch_exception_target_el(CPUARMState *env) +{ + int el = arm_current_el(env); + + if (el > 1) { + return el; + } + if (el == 0 && (arm_hcr_el2_eff(env) & HCR_TGE)) { + return 2; + } + if (el == 1 && (arm_hcrx_el2_eff(env) & HCRX_MCE2)) { + return 2; + } + return 1; +} + +/* + * Check whether an M or E instruction was executed with a CF value + * indicating the wrong option for this implementation. + * Assumes we are always Option A. + */ +static void check_mops_wrong_option(CPUARMState *env, uint32_t syndrome, + uintptr_t ra) +{ + if (env->CF != 0) { + syndrome |= 1 << 17; /* Set the wrong-option bit */ + raise_exception_ra(env, EXCP_UDEF, syndrome, + mops_mismatch_exception_target_el(env), ra); + } +} + +/* + * Return the maximum number of bytes we can transfer starting at addr + * without crossing a page boundary. + */ +static uint64_t page_limit(uint64_t addr) +{ + return TARGET_PAGE_ALIGN(addr + 1) - addr; +} + +/* + * Perform part of a memory set on an area of guest memory starting at + * toaddr (a dirty address) and extending for setsize bytes. + * + * Returns the number of bytes actually set, which might be less than + * setsize; the caller should loop until the whole set has been done. + * The caller should ensure that the guest registers are correct + * for the possibility that the first byte of the set encounters + * an exception or watchpoint. We guarantee not to take any faults + * for bytes other than the first. + */ +static uint64_t set_step(CPUARMState *env, uint64_t toaddr, + uint64_t setsize, uint32_t data, int memidx, + uint32_t *mtedesc, uintptr_t ra) +{ + void *mem; + + setsize = MIN(setsize, page_limit(toaddr)); + if (*mtedesc) { + uint64_t mtesize = mte_mops_probe(env, toaddr, setsize, *mtedesc); + if (mtesize == 0) { + /* Trap, or not. 
All CPU state is up to date */ + mte_check_fail(env, *mtedesc, toaddr, ra); + /* Continue, with no further MTE checks required */ + *mtedesc = 0; + } else { + /* Advance to the end, or to the tag mismatch */ + setsize = MIN(setsize, mtesize); + } + } + + toaddr = useronly_clean_ptr(toaddr); + /* + * Trapless lookup: returns NULL for invalid page, I/O, + * watchpoints, clean pages, etc. + */ + mem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, memidx); + +#ifndef CONFIG_USER_ONLY + if (unlikely(!mem)) { + /* + * Slow-path: just do one byte write. This will handle the + * watchpoint, invalid page, etc handling correctly. + * For clean code pages, the next iteration will see + * the page dirty and will use the fast path. + */ + cpu_stb_mmuidx_ra(env, toaddr, data, memidx, ra); + return 1; + } +#endif + /* Easy case: just memset the host memory */ + memset(mem, data, setsize); + return setsize; +} + +typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr, + uint64_t setsize, uint32_t data, + int memidx, uint32_t *mtedesc, uintptr_t ra); + +/* Extract register numbers from a MOPS exception syndrome value */ +static int mops_destreg(uint32_t syndrome) +{ + return extract32(syndrome, 10, 5); +} + +static int mops_srcreg(uint32_t syndrome) +{ + return extract32(syndrome, 5, 5); +} + +static int mops_sizereg(uint32_t syndrome) +{ + return extract32(syndrome, 0, 5); +} + +/* + * Return true if TCMA and TBI bits mean we need to do MTE checks. + * We only need to do this once per MOPS insn, not for every page. + */ +static bool mte_checks_needed(uint64_t ptr, uint32_t desc) +{ + int bit55 = extract64(ptr, 55, 1); + + /* + * Note that tbi_check() returns true for "access checked" but + * tcma_check() returns true for "access unchecked". + */ + if (!tbi_check(desc, bit55)) { + return false; + } + return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr)); +} + +/* + * For the Memory Set operation, our implementation chooses + * always to use "option A", where we update Xd to the final + * address in the SETP insn, and set Xn to be -(bytes remaining). + * On SETM and SETE insns we only need update Xn. 
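+ * As an illustration (values are arbitrary): a SETP with Xd == 0x1000
+ * and Xn == 0x3000 that completes its prologue stage leaves Xd == 0x4000
+ * (the address just past the end of the whole region) and Xn ==
+ * -(bytes still to be set), so the SETM/SETE helpers can recover their
+ * start address as Xd + Xn.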
+ * + * @env: CPU + * @syndrome: syndrome value for mismatch exceptions + * (also contains the register numbers we need to use) + * @mtedesc: MTE descriptor word + * @stepfn: function which does a single part of the set operation + * @is_setg: true if this is the tag-setting SETG variant + */ +static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, + StepFn *stepfn, bool is_setg, uintptr_t ra) +{ + /* Prologue: we choose to do up to the next page boundary */ + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint8_t data = env->xregs[rs]; + uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX); + uint64_t toaddr = env->xregs[rd]; + uint64_t setsize = env->xregs[rn]; + uint64_t stagesetsize, step; + + check_mops_enabled(env, ra); + + if (setsize > INT64_MAX) { + setsize = INT64_MAX; + } + + if (!mte_checks_needed(toaddr, mtedesc)) { + mtedesc = 0; + } + + stagesetsize = MIN(setsize, page_limit(toaddr)); + while (stagesetsize) { + env->xregs[rd] = toaddr; + env->xregs[rn] = setsize; + step = stepfn(env, toaddr, stagesetsize, data, memidx, &mtedesc, ra); + toaddr += step; + setsize -= step; + stagesetsize -= step; + } + /* Insn completed, so update registers to the Option A format */ + env->xregs[rd] = toaddr + setsize; + env->xregs[rn] = -setsize; + + /* Set NZCV = 0000 to indicate we are an Option A implementation */ + env->NF = 0; + env->ZF = 1; /* our env->ZF encoding is inverted */ + env->CF = 0; + env->VF = 0; + return; +} + +void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_setp(env, syndrome, mtedesc, set_step, false, GETPC()); +} + +static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, + StepFn *stepfn, bool is_setg, uintptr_t ra) +{ + /* Main: we choose to do all the full-page chunks */ + CPUState *cs = env_cpu(env); + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint8_t data = env->xregs[rs]; + uint64_t toaddr = env->xregs[rd] + env->xregs[rn]; + uint64_t setsize = -env->xregs[rn]; + uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX); + uint64_t step, stagesetsize; + + check_mops_enabled(env, ra); + + /* + * We're allowed to NOP out "no data to copy" before the consistency + * checks; we choose to do so. + */ + if (env->xregs[rn] == 0) { + return; + } + + check_mops_wrong_option(env, syndrome, ra); + + /* + * Our implementation will work fine even if we have an unaligned + * destination address, and because we update Xn every time around + * the loop below and the return value from stepfn() may be less + * than requested, we might find toaddr is unaligned. So we don't + * have an IMPDEF check for alignment here. 
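+ * (For instance, the byte-at-a-time slow path in set_step() can return 1,
+ * which leaves toaddr at an arbitrary alignment for the next iteration.)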
+ */ + + if (!mte_checks_needed(toaddr, mtedesc)) { + mtedesc = 0; + } + + /* Do the actual memset: we leave the last partial page to SETE */ + stagesetsize = setsize & TARGET_PAGE_MASK; + while (stagesetsize > 0) { + step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra); + toaddr += step; + setsize -= step; + stagesetsize -= step; + env->xregs[rn] = -setsize; + if (stagesetsize > 0 && unlikely(cpu_loop_exit_requested(cs))) { + cpu_loop_exit_restore(cs, ra); + } + } +} + +void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_setm(env, syndrome, mtedesc, set_step, false, GETPC()); +} + +static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, + StepFn *stepfn, bool is_setg, uintptr_t ra) +{ + /* Epilogue: do the last partial page */ + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint8_t data = env->xregs[rs]; + uint64_t toaddr = env->xregs[rd] + env->xregs[rn]; + uint64_t setsize = -env->xregs[rn]; + uint32_t memidx = FIELD_EX32(mtedesc, MTEDESC, MIDX); + uint64_t step; + + check_mops_enabled(env, ra); + + /* + * We're allowed to NOP out "no data to copy" before the consistency + * checks; we choose to do so. + */ + if (setsize == 0) { + return; + } + + check_mops_wrong_option(env, syndrome, ra); + + /* + * Our implementation has no address alignment requirements, but + * we do want to enforce the "less than a page" size requirement, + * so we don't need to have the "check for interrupts" here. + */ + if (setsize >= TARGET_PAGE_SIZE) { + raise_exception_ra(env, EXCP_UDEF, syndrome, + mops_mismatch_exception_target_el(env), ra); + } + + if (!mte_checks_needed(toaddr, mtedesc)) { + mtedesc = 0; + } + + /* Do the actual memset */ + while (setsize > 0) { + step = stepfn(env, toaddr, setsize, data, memidx, &mtedesc, ra); + toaddr += step; + setsize -= step; + env->xregs[rn] = -setsize; + } +} + +void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_sete(env, syndrome, mtedesc, set_step, false, GETPC()); +} diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c index 24afd929144..bb7b15cb6cb 100644 --- a/target/arm/tcg/translate-a64.c +++ b/target/arm/tcg/translate-a64.c @@ -3962,6 +3962,55 @@ TRANS_FEAT(STZG, aa64_mte_insn_reg, do_STG, a, true, false) TRANS_FEAT(ST2G, aa64_mte_insn_reg, do_STG, a, false, true) TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true) +typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32); + +static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn) +{ + int memidx; + uint32_t syndrome, desc = 0; + + /* + * UNPREDICTABLE cases: we choose to UNDEF, which allows + * us to pull this check before the CheckMOPSEnabled() test + * (which we do in the helper function) + */ + if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd || + a->rd == 31 || a->rn == 31) { + return false; + } + + memidx = get_a64_user_mem_index(s, a->unpriv); + + /* + * We pass option_a == true, matching our implementation; + * we pass wrong_option == false: helper function may set that bit. 
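+ * The syndrome also encodes the register numbers and option bits, and
+ * is what gets reported if the helper raises the FEAT_MOPS
+ * wrong-option/mismatch exception.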
+ */ + syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv, + is_epilogue, false, true, a->rd, a->rs, a->rn); + + if (s->mte_active[a->unpriv]) { + /* We may need to do MTE tag checking, so assemble the descriptor */ + desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid); + desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma); + desc = FIELD_DP32(desc, MTEDESC, WRITE, true); + /* SIZEM1 and ALIGN we leave 0 (byte write) */ + } + /* The helper function always needs the memidx even with MTE disabled */ + desc = FIELD_DP32(desc, MTEDESC, MIDX, memidx); + + /* + * The helper needs the register numbers, but since they're in + * the syndrome anyway, we let it extract them from there rather + * than passing in an extra three integer arguments. + */ + fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(desc)); + return true; +} + +TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp) +TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm) +TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete) + typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64); static bool gen_rri(DisasContext *s, arg_rri_sf *a, From patchwork Tue Sep 12 14:04:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 721778 Delivered-To: patch@linaro.org Received: by 2002:adf:f64d:0:b0:31d:da82:a3b4 with SMTP id x13csp1666300wrp; Tue, 12 Sep 2023 07:08:41 -0700 (PDT) X-Google-Smtp-Source: AGHT+IHAF1M23sjtAUtSWkSqX1kh1P5Qb3c7bHN9S2LndeiH2zAPlnI+ga5Q9MSTuceGoWliXL7z X-Received: by 2002:ac8:7e8c:0:b0:411:fa19:bd63 with SMTP id w12-20020ac87e8c000000b00411fa19bd63mr18546767qtj.31.1694527720818; Tue, 12 Sep 2023 07:08:40 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1694527720; cv=none; d=google.com; s=arc-20160816; b=0ugFwap3uGOXwK73YrsnZcQFwzX7WlvN2/nVqOw2FwoxhBdN8H4Md3qRNnO+RuzcWX HguqHx4I0qdFV4SuJ8M+gPAn9RRHn+FIT+cjp1Cx9gMtFz1+oftMLEQm7PE3XwB3ApzE QSnz3YYAK2PA9h+F8Xw+pkmIvj+tR490IhG0SUVB8N46OeboFDM9bv2WgtX8lo9K3A63 kpMFd4Nt9fhAjqmpSmmfcjm7AhoobyhD/Jlqnm/1Zl4+Uk+velsA6T4fOm+0FEKdRZPc IKUiDExw5d9MtpqgpwBbWx3SWx+eof4UlJDXn5RDxa9uyJBxZi2NJVENa/fYFVcsdUBZ Z0MQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=LKEgjrZx4gf0ML/IevGyBJ31OzflOFa4C2rhH8LLKYk=; fh=H2AmuqulvQE+T5zu97MCEUC3z9wF9NssS7895NhR/+c=; b=gzCWAqSSBPqqKGfI7c3BZKQMz/lqWFIM3lOVdYu4qb9o+x2Ok8L9VURpoUCcRpKTBf hLytBQblszry9jE6ldkgvQahBz8p9woq0xXYsLNy88888wF/VCYZKQU3ErvZcqCo+uHE yG24DrNB/VPl/IeSEcTrOX12IO0qTcaDKkNqESdF/avRa376i4aqrsA42T2f+8K1Sox2 L+EMstlTKNPSRdsO9C++2kQULFN/SroNRlJp6KEW6Zlax8ncp+dZi6wb+SIjITgJai3F bl1enKPwDTPrPZHEk1sY4/jZCRWm+nW4dciAfumVHN2Z8H1lcsrwx4OfI5go2BxYzxPA Stsg== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=OwdvjT+a; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[2001:8b0:1d0::2]) by smtp.gmail.com with ESMTPSA id r3-20020a5d4983000000b00317ab75748bsm12892672wrq.49.2023.09.12.07.04.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Sep 2023 07:04:43 -0700 (PDT) From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 08/12] target/arm: Define new TB flag for ATA0 Date: Tue, 12 Sep 2023 15:04:30 +0100 Message-Id: <20230912140434.1333369-9-peter.maydell@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42d; envelope-from=peter.maydell@linaro.org; helo=mail-wr1-x42d.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org Currently the only tag-setting instructions always do so in the context of the current EL, and so we only need one ATA bit in the TB flags. The FEAT_MOPS SETG instructions include ones which set tags for a non-privileged access, so we now also need the equivalent "are tags enabled?" information for EL0. Add the new TB flag, and convert the existing 'bool ata' field in DisasContext to a 'bool ata[2]' that can be indexed by the is_unpriv bit in an instruction, similarly to mte[2]. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/cpu.h | 1 + target/arm/tcg/translate.h | 4 ++-- target/arm/tcg/hflags.c | 12 ++++++++++++ target/arm/tcg/translate-a64.c | 23 ++++++++++++----------- 4 files changed, 27 insertions(+), 13 deletions(-) diff --git a/target/arm/cpu.h b/target/arm/cpu.h index 266c1a9ea1b..bd55c5dabfd 100644 --- a/target/arm/cpu.h +++ b/target/arm/cpu.h @@ -3171,6 +3171,7 @@ FIELD(TBFLAG_A64, SVL, 24, 4) FIELD(TBFLAG_A64, SME_TRAP_NONSTREAMING, 28, 1) FIELD(TBFLAG_A64, FGT_ERET, 29, 1) FIELD(TBFLAG_A64, NAA, 30, 1) +FIELD(TBFLAG_A64, ATA0, 31, 1) /* * Helpers for using the above. diff --git a/target/arm/tcg/translate.h b/target/arm/tcg/translate.h index f748ba6f394..63922f8bad1 100644 --- a/target/arm/tcg/translate.h +++ b/target/arm/tcg/translate.h @@ -114,8 +114,8 @@ typedef struct DisasContext { bool unpriv; /* True if v8.3-PAuth is active. */ bool pauth_active; - /* True if v8.5-MTE access to tags is enabled. */ - bool ata; + /* True if v8.5-MTE access to tags is enabled; index with is_unpriv. */ + bool ata[2]; /* True if v8.5-MTE tag checks affect the PE; index with is_unpriv. */ bool mte_active[2]; /* True with v8.5-BTI and SCTLR_ELx.BT* set. */ diff --git a/target/arm/tcg/hflags.c b/target/arm/tcg/hflags.c index ea642384f5a..cea1adb7b62 100644 --- a/target/arm/tcg/hflags.c +++ b/target/arm/tcg/hflags.c @@ -325,6 +325,18 @@ static CPUARMTBFlags rebuild_hflags_a64(CPUARMState *env, int el, int fp_el, && allocation_tag_access_enabled(env, 0, sctlr)) { DP_TBFLAG_A64(flags, MTE0_ACTIVE, 1); } + /* + * For unpriv tag-setting accesses we alse need ATA0. 
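+ * (That is, whether an EL0 access would be permitted to use allocation
+ * tags at all, roughly SCTLR_EL1.ATA0 plus the usual SCR_EL3/HCR_EL2
+ * ATA gating.)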
Again, in + * contexts where unpriv and normal insns are the same we + * duplicate the ATA bit to save effort for translate-a64.c. + */ + if (EX_TBFLAG_A64(flags, UNPRIV)) { + if (allocation_tag_access_enabled(env, 0, sctlr)) { + DP_TBFLAG_A64(flags, ATA0, 1); + } + } else { + DP_TBFLAG_A64(flags, ATA0, EX_TBFLAG_A64(flags, ATA)); + } /* Cache TCMA as well as TBI. */ DP_TBFLAG_A64(flags, TCMA, aa64_va_parameter_tcma(tcr, mmu_idx)); } diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c index bb7b15cb6cb..da4aabbaf4e 100644 --- a/target/arm/tcg/translate-a64.c +++ b/target/arm/tcg/translate-a64.c @@ -2272,7 +2272,7 @@ static void handle_sys(DisasContext *s, bool isread, clean_addr = clean_data_tbi(s, tcg_rt); gen_probe_access(s, clean_addr, MMU_DATA_STORE, MO_8); - if (s->ata) { + if (s->ata[0]) { /* Extract the tag from the register to match STZGM. */ tag = tcg_temp_new_i64(); tcg_gen_shri_i64(tag, tcg_rt, 56); @@ -2289,7 +2289,7 @@ static void handle_sys(DisasContext *s, bool isread, clean_addr = clean_data_tbi(s, tcg_rt); gen_helper_dc_zva(cpu_env, clean_addr); - if (s->ata) { + if (s->ata[0]) { /* Extract the tag from the register to match STZGM. */ tag = tcg_temp_new_i64(); tcg_gen_shri_i64(tag, tcg_rt, 56); @@ -3070,7 +3070,7 @@ static bool trans_STGP(DisasContext *s, arg_ldstpair *a) tcg_gen_qemu_st_i128(tmp, clean_addr, get_mem_index(s), mop); /* Perform the tag store, if tag access enabled. */ - if (s->ata) { + if (s->ata[0]) { if (tb_cflags(s->base.tb) & CF_PARALLEL) { gen_helper_stg_parallel(cpu_env, dirty_addr, dirty_addr); } else { @@ -3768,7 +3768,7 @@ static bool trans_STZGM(DisasContext *s, arg_ldst_tag *a) tcg_gen_addi_i64(addr, addr, a->imm); tcg_rt = cpu_reg(s, a->rt); - if (s->ata) { + if (s->ata[0]) { gen_helper_stzgm_tags(cpu_env, addr, tcg_rt); } /* @@ -3800,7 +3800,7 @@ static bool trans_STGM(DisasContext *s, arg_ldst_tag *a) tcg_gen_addi_i64(addr, addr, a->imm); tcg_rt = cpu_reg(s, a->rt); - if (s->ata) { + if (s->ata[0]) { gen_helper_stgm(cpu_env, addr, tcg_rt); } else { MMUAccessType acc = MMU_DATA_STORE; @@ -3832,7 +3832,7 @@ static bool trans_LDGM(DisasContext *s, arg_ldst_tag *a) tcg_gen_addi_i64(addr, addr, a->imm); tcg_rt = cpu_reg(s, a->rt); - if (s->ata) { + if (s->ata[0]) { gen_helper_ldgm(tcg_rt, cpu_env, addr); } else { MMUAccessType acc = MMU_DATA_LOAD; @@ -3867,7 +3867,7 @@ static bool trans_LDG(DisasContext *s, arg_ldst_tag *a) tcg_gen_andi_i64(addr, addr, -TAG_GRANULE); tcg_rt = cpu_reg(s, a->rt); - if (s->ata) { + if (s->ata[0]) { gen_helper_ldg(tcg_rt, cpu_env, addr, tcg_rt); } else { /* @@ -3904,7 +3904,7 @@ static bool do_STG(DisasContext *s, arg_ldst_tag *a, bool is_zero, bool is_pair) tcg_gen_addi_i64(addr, addr, a->imm); } tcg_rt = cpu_reg_sp(s, a->rt); - if (!s->ata) { + if (!s->ata[0]) { /* * For STG and ST2G, we need to check alignment and probe memory. 
* TODO: For STZG and STZ2G, we could rely on the stores below, @@ -4073,7 +4073,7 @@ static bool gen_add_sub_imm_with_tags(DisasContext *s, arg_rri_tag *a, tcg_rn = cpu_reg_sp(s, a->rn); tcg_rd = cpu_reg_sp(s, a->rd); - if (s->ata) { + if (s->ata[0]) { gen_helper_addsubg(tcg_rd, cpu_env, tcg_rn, tcg_constant_i32(imm), tcg_constant_i32(a->uimm4)); @@ -5460,7 +5460,7 @@ static void disas_data_proc_2src(DisasContext *s, uint32_t insn) if (sf == 0 || !dc_isar_feature(aa64_mte_insn_reg, s)) { goto do_unallocated; } - if (s->ata) { + if (s->ata[0]) { gen_helper_irg(cpu_reg_sp(s, rd), cpu_env, cpu_reg_sp(s, rn), cpu_reg(s, rm)); } else { @@ -13951,7 +13951,8 @@ static void aarch64_tr_init_disas_context(DisasContextBase *dcbase, dc->bt = EX_TBFLAG_A64(tb_flags, BT); dc->btype = EX_TBFLAG_A64(tb_flags, BTYPE); dc->unpriv = EX_TBFLAG_A64(tb_flags, UNPRIV); - dc->ata = EX_TBFLAG_A64(tb_flags, ATA); + dc->ata[0] = EX_TBFLAG_A64(tb_flags, ATA); + dc->ata[1] = EX_TBFLAG_A64(tb_flags, ATA0); dc->mte_active[0] = EX_TBFLAG_A64(tb_flags, MTE_ACTIVE); dc->mte_active[1] = EX_TBFLAG_A64(tb_flags, MTE0_ACTIVE); dc->pstate_sm = EX_TBFLAG_A64(tb_flags, PSTATE_SM); From patchwork Tue Sep 12 14:04:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 721776 Delivered-To: patch@linaro.org Received: by 2002:adf:f64d:0:b0:31d:da82:a3b4 with SMTP id x13csp1666157wrp; Tue, 12 Sep 2023 07:08:33 -0700 (PDT) X-Google-Smtp-Source: AGHT+IG7FHl052gkmmP36Z0E155/L5hspRFDjJ38ouV4ccMhxUWvWdKAk33h2H878uVR8IsLdbdp X-Received: by 2002:a05:620a:a03:b0:770:67e3:1fee with SMTP id i3-20020a05620a0a0300b0077067e31feemr11381430qka.65.1694527710745; Tue, 12 Sep 2023 07:08:30 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1694527710; cv=none; d=google.com; s=arc-20160816; b=KN1whVWgQQv1RJMlVEpvKbI3GDjmcDmgCwjnmBQWRQfBtGIrq52eGH9QeqJEhDFzdY yZx6HnXM8t1fCMLydk+EtmxN4aanTmGyhdJY1L+pTnjNl+JbvlsfmcK/VsiYtuGSG0NR bkDubTC38BtzxZvZPA6pBOGW4bW5fXloAKyEHX0Nv6SdQPhx0tbXRvLRp3q2AtXEBMdf mLWf05wtw3Xml9IDDHkhrbYA1fG53lNYAGejhwGZmvmzGwa51VIJO4a8dr3WCLQuRt8e WIL4RMEWAt/cgpaaU3w37cSy7rqC47rvv565w6MC4toXzwSwJ8MvH88mPKgu5CP1s0sk KPKg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=sender:errors-to:list-subscribe:list-help:list-post:list-archive :list-unsubscribe:list-id:precedence:content-transfer-encoding :mime-version:references:in-reply-to:message-id:date:subject:to:from :dkim-signature; bh=oMzPBnHkJEk4I1sTfs7iYXVAv7BO9UDWeNUPlFOQw40=; fh=H2AmuqulvQE+T5zu97MCEUC3z9wF9NssS7895NhR/+c=; b=aO/DaCRmr5R+qMg6NpsVG0p54zbrCpDPL5h8gfIYxIJS5SUg0kvQ14X6yVQqB72f/v 7TWbLpHAPtITblwY7Yj9OrPDf5KpRcl/Ovi+JxERkNWuzOl8Z5OzjkRTarrfhIP2jvqE NVm4qvfXelMODAKrdrsE+uppPbsEivtV+d0eJcu60jlnhkEkhEMdNd1ofHL0oe7IQDCt O7XBgBjB2J7Puom7aP0fn1DLeSuAHJ+ePc+Bwn9VgfnuXbpIZHGNxWHzqmn/XN998gXy Nsu9425waP8kvstIf8sj0RlzOeY5TLftptOeCIkoRi9ht+T6E10FdupRCPOEmf9jTudx vDDw== ARC-Authentication-Results: i=1; mx.google.com; dkim=pass header.i=@linaro.org header.s=google header.b=pwc9mIsT; spf=pass (google.com: domain of qemu-devel-bounces+patch=linaro.org@nongnu.org designates 209.51.188.17 as permitted sender) smtp.mailfrom="qemu-devel-bounces+patch=linaro.org@nongnu.org"; dmarc=pass (p=NONE sp=NONE dis=NONE) header.from=linaro.org Return-Path: Received: from lists.gnu.org (lists.gnu.org. 
[2001:8b0:1d0::2]) by smtp.gmail.com with ESMTPSA id r3-20020a5d4983000000b00317ab75748bsm12892672wrq.49.2023.09.12.07.04.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Sep 2023 07:04:44 -0700 (PDT) From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 09/12] target/arm: Implement the SETG* instructions Date: Tue, 12 Sep 2023 15:04:31 +0100 Message-Id: <20230912140434.1333369-10-peter.maydell@linaro.org> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> MIME-Version: 1.0 Received-SPF: pass client-ip=2a00:1450:4864:20::42e; envelope-from=peter.maydell@linaro.org; helo=mail-wr1-x42e.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=unavailable autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+patch=linaro.org@nongnu.org Sender: qemu-devel-bounces+patch=linaro.org@nongnu.org The FEAT_MOPS SETG* instructions are very similar to the SET* instructions, but as well as setting memory contents they also set the MTE tags. They are architecturally required to operate on tag-granule aligned regions only. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- v2: - separate helper functions calling do_setp/setm/sete - use cpu_st16_mmu() --- target/arm/internals.h | 10 ++++ target/arm/tcg/helper-a64.h | 3 ++ target/arm/tcg/a64.decode | 5 ++ target/arm/tcg/helper-a64.c | 86 ++++++++++++++++++++++++++++++++-- target/arm/tcg/mte_helper.c | 40 ++++++++++++++++ target/arm/tcg/translate-a64.c | 20 +++++--- 6 files changed, 155 insertions(+), 9 deletions(-) diff --git a/target/arm/internals.h b/target/arm/internals.h index a70a7fd50f6..642f77df29b 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1300,6 +1300,16 @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, void mte_check_fail(CPUARMState *env, uint32_t desc, uint64_t dirty_ptr, uintptr_t ra); +/** + * mte_mops_set_tags: Set MTE tags for a portion of a FEAT_MOPS operation + * @env: CPU env + * @dirty_ptr: Start address of memory region (dirty pointer) + * @size: length of region (guaranteed not to cross page boundary) + * @desc: MTEDESC descriptor word + */ +void mte_mops_set_tags(CPUARMState *env, uint64_t dirty_ptr, uint64_t size, + uint32_t desc); + static inline int allocation_tag_from_addr(uint64_t ptr) { return extract64(ptr, 56, 4); diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h index 7ce5d2105ad..10a99107124 100644 --- a/target/arm/tcg/helper-a64.h +++ b/target/arm/tcg/helper-a64.h @@ -121,3 +121,6 @@ DEF_HELPER_FLAGS_4(unaligned_access, TCG_CALL_NO_WG, DEF_HELPER_3(setp, void, env, i32, i32) DEF_HELPER_3(setm, void, env, i32, i32) DEF_HELPER_3(sete, void, env, i32, i32) +DEF_HELPER_3(setgp, void, env, i32, i32) +DEF_HELPER_3(setgm, void, env, i32, i32) +DEF_HELPER_3(setge, void, env, i32, i32) diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode index c2a97328eeb..a202faa17bc 100644 --- a/target/arm/tcg/a64.decode +++ b/target/arm/tcg/a64.decode @@ -570,3 +570,8 @@ STZ2G 11011001 11 1 ......... 11 ..... 
..... @ldst_tag p=0 w=1 SETP 00 011001110 ..... 00 . . 01 ..... ..... @set SETM 00 011001110 ..... 01 . . 01 ..... ..... @set SETE 00 011001110 ..... 10 . . 01 ..... ..... @set + +# Like SET, but also setting MTE tags +SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set +SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set +SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c index 24ae5ecf32e..2cf89184d77 100644 --- a/target/arm/tcg/helper-a64.c +++ b/target/arm/tcg/helper-a64.c @@ -1103,6 +1103,50 @@ static uint64_t set_step(CPUARMState *env, uint64_t toaddr, return setsize; } +/* + * Similar, but setting tags. The architecture requires us to do this + * in 16-byte chunks. SETP accesses are not tag checked; they set + * the tags. + */ +static uint64_t set_step_tags(CPUARMState *env, uint64_t toaddr, + uint64_t setsize, uint32_t data, int memidx, + uint32_t *mtedesc, uintptr_t ra) +{ + void *mem; + uint64_t cleanaddr; + + setsize = MIN(setsize, page_limit(toaddr)); + + cleanaddr = useronly_clean_ptr(toaddr); + /* + * Trapless lookup: returns NULL for invalid page, I/O, + * watchpoints, clean pages, etc. + */ + mem = tlb_vaddr_to_host(env, cleanaddr, MMU_DATA_STORE, memidx); + +#ifndef CONFIG_USER_ONLY + if (unlikely(!mem)) { + /* + * Slow-path: just do one write. This will handle the + * watchpoint, invalid page, etc handling correctly. + * The architecture requires that we do 16 bytes at a time, + * and we know both ptr and size are 16 byte aligned. + * For clean code pages, the next iteration will see + * the page dirty and will use the fast path. + */ + uint64_t repldata = data * 0x0101010101010101ULL; + MemOpIdx oi16 = make_memop_idx(MO_TE | MO_128, memidx); + cpu_st16_mmu(env, toaddr, int128_make128(repldata, repldata), oi16, ra); + mte_mops_set_tags(env, toaddr, 16, *mtedesc); + return 16; + } +#endif + /* Easy case: just memset the host memory */ + memset(mem, data, setsize); + mte_mops_set_tags(env, toaddr, setsize, *mtedesc); + return setsize; +} + typedef uint64_t StepFn(CPUARMState *env, uint64_t toaddr, uint64_t setsize, uint32_t data, int memidx, uint32_t *mtedesc, uintptr_t ra); @@ -1141,6 +1185,18 @@ static bool mte_checks_needed(uint64_t ptr, uint32_t desc) return !tcma_check(desc, bit55, allocation_tag_from_addr(ptr)); } +/* Take an exception if the SETG addr/size are not granule aligned */ +static void check_setg_alignment(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t memidx, uintptr_t ra) +{ + if ((size != 0 && !QEMU_IS_ALIGNED(ptr, TAG_GRANULE)) || + !QEMU_IS_ALIGNED(size, TAG_GRANULE)) { + arm_cpu_do_unaligned_access(env_cpu(env), ptr, MMU_DATA_STORE, + memidx, ra); + + } +} + /* * For the Memory Set operation, our implementation chooses * always to use "option A", where we update Xd to the final @@ -1171,9 +1227,14 @@ static void do_setp(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, if (setsize > INT64_MAX) { setsize = INT64_MAX; + if (is_setg) { + setsize &= ~0xf; + } } - if (!mte_checks_needed(toaddr, mtedesc)) { + if (unlikely(is_setg)) { + check_setg_alignment(env, toaddr, setsize, memidx, ra); + } else if (!mte_checks_needed(toaddr, mtedesc)) { mtedesc = 0; } @@ -1203,6 +1264,11 @@ void HELPER(setp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) do_setp(env, syndrome, mtedesc, set_step, false, GETPC()); } +void HELPER(setgp)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_setp(env, syndrome, mtedesc, set_step_tags, true, GETPC()); +} + static void 
do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, StepFn *stepfn, bool is_setg, uintptr_t ra) { @@ -1237,7 +1303,9 @@ static void do_setm(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, * have an IMPDEF check for alignment here. */ - if (!mte_checks_needed(toaddr, mtedesc)) { + if (unlikely(is_setg)) { + check_setg_alignment(env, toaddr, setsize, memidx, ra); + } else if (!mte_checks_needed(toaddr, mtedesc)) { mtedesc = 0; } @@ -1260,6 +1328,11 @@ void HELPER(setm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) do_setm(env, syndrome, mtedesc, set_step, false, GETPC()); } +void HELPER(setgm)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_setm(env, syndrome, mtedesc, set_step_tags, true, GETPC()); +} + static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, StepFn *stepfn, bool is_setg, uintptr_t ra) { @@ -1295,7 +1368,9 @@ static void do_sete(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc, mops_mismatch_exception_target_el(env), ra); } - if (!mte_checks_needed(toaddr, mtedesc)) { + if (unlikely(is_setg)) { + check_setg_alignment(env, toaddr, setsize, memidx, ra); + } else if (!mte_checks_needed(toaddr, mtedesc)) { mtedesc = 0; } @@ -1312,3 +1387,8 @@ void HELPER(sete)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) { do_sete(env, syndrome, mtedesc, set_step, false, GETPC()); } + +void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) +{ + do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC()); +} diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c index 1cb61cea7af..66a80eeb950 100644 --- a/target/arm/tcg/mte_helper.c +++ b/target/arm/tcg/mte_helper.c @@ -1041,3 +1041,43 @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, return n * TAG_GRANULE - (ptr - tag_first); } } + +void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t desc) +{ + int mmu_idx, tag_count; + uint64_t ptr_tag; + void *mem; + + if (!desc) { + /* Tags not actually enabled */ + return; + } + + mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX); + /* True probe: this will never fault */ + mem = allocation_tag_mem_probe(env, mmu_idx, ptr, MMU_DATA_STORE, size, + MMU_DATA_STORE, true, 0); + if (!mem) { + return; + } + + /* + * We know that ptr and size are both TAG_GRANULE aligned; store + * the tag from the pointer value into the tag memory. 
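+ * For example: a 48-byte (three granule) region starting at a
+ * 32-byte-aligned address stores one whole tag byte (covering two
+ * granules) plus a trailing odd nibble; if the start is 16-byte aligned
+ * but not 32-byte aligned, the odd nibble is written at the front
+ * instead and the remaining two granules share a whole tag byte.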
+ */ + ptr_tag = allocation_tag_from_addr(ptr); + tag_count = size / TAG_GRANULE; + if (ptr & TAG_GRANULE) { + /* Not 2*TAG_GRANULE-aligned: store tag to first nibble */ + store_tag1_parallel(TAG_GRANULE, mem, ptr_tag); + mem++; + tag_count--; + } + memset(mem, ptr_tag | (ptr_tag << 4), tag_count / 2); + if (tag_count & 1) { + /* Final trailing unaligned nibble */ + mem += tag_count / 2; + store_tag1_parallel(0, mem, ptr_tag); + } +} diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c index da4aabbaf4e..27bb3039b4d 100644 --- a/target/arm/tcg/translate-a64.c +++ b/target/arm/tcg/translate-a64.c @@ -3964,11 +3964,16 @@ TRANS_FEAT(STZ2G, aa64_mte_insn_reg, do_STG, a, true, true) typedef void SetFn(TCGv_env, TCGv_i32, TCGv_i32); -static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn) +static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, + bool is_setg, SetFn fn) { int memidx; uint32_t syndrome, desc = 0; + if (is_setg && !dc_isar_feature(aa64_mte, s)) { + return false; + } + /* * UNPREDICTABLE cases: we choose to UNDEF, which allows * us to pull this check before the CheckMOPSEnabled() test @@ -3985,10 +3990,10 @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn) * We pass option_a == true, matching our implementation; * we pass wrong_option == false: helper function may set that bit. */ - syndrome = syn_mop(true, false, (a->nontemp << 1) | a->unpriv, + syndrome = syn_mop(true, is_setg, (a->nontemp << 1) | a->unpriv, is_epilogue, false, true, a->rd, a->rs, a->rn); - if (s->mte_active[a->unpriv]) { + if (is_setg ? s->ata[a->unpriv] : s->mte_active[a->unpriv]) { /* We may need to do MTE tag checking, so assemble the descriptor */ desc = FIELD_DP32(desc, MTEDESC, TBI, s->tbid); desc = FIELD_DP32(desc, MTEDESC, TCMA, s->tcma); @@ -4007,9 +4012,12 @@ static bool do_SET(DisasContext *s, arg_set *a, bool is_epilogue, SetFn fn) return true; } -TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, gen_helper_setp) -TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, gen_helper_setm) -TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, gen_helper_sete) +TRANS_FEAT(SETP, aa64_mops, do_SET, a, false, false, gen_helper_setp) +TRANS_FEAT(SETM, aa64_mops, do_SET, a, false, false, gen_helper_setm) +TRANS_FEAT(SETE, aa64_mops, do_SET, a, true, false, gen_helper_sete) +TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp) +TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm) +TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge) typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64); From patchwork Tue Sep 12 14:04:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Maydell X-Patchwork-Id: 721770 Delivered-To: patch@linaro.org Received: by 2002:adf:f64d:0:b0:31d:da82:a3b4 with SMTP id x13csp1664811wrp; Tue, 12 Sep 2023 07:06:46 -0700 (PDT) X-Google-Smtp-Source: AGHT+IFtBFHB1GcNryydrOIsC9enjMSZkV2nNN4c2oQ2umZNejQS4auev1/GyZF+xon7+gwEWM/1 X-Received: by 2002:a05:6214:410e:b0:651:6604:bee6 with SMTP id kc14-20020a056214410e00b006516604bee6mr2942430qvb.30.1694527606635; Tue, 12 Sep 2023 07:06:46 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1694527606; cv=none; d=google.com; s=arc-20160816; b=pDmsosWrW7GEMGiAfcvkJlTMSHjwh38KskhX0adklWWu/YpRgh93ux6agi+0Dmuurl Gx5l0N0cnRnF1o51+7WKsyZ0EeJ8HOkzKuJNypZV6tRuby6w8FTYs7wee+oCFy82j00e zErsUh1c6rFGMzXdutMGw4cFKERz/i3qA7eqLF7kTDNJfb70yyOaG+GUb1TztW1SxXYk 
From patchwork Tue Sep 12 14:04:32 2023 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 10/12] target/arm: Implement MTE tag-checking functions for FEAT_MOPS copies Date: Tue, 12 Sep 2023 15:04:32 +0100 Message-Id: <20230912140434.1333369-11-peter.maydell@linaro.org> In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> The FEAT_MOPS memory copy operations need an extra helper routine for checking for MTE tag checking failures beyond the ones we already added for memory set operations: * mte_mops_probe_rev() does the same job as mte_mops_probe(), but it checks tags starting at the provided address and working backwards, rather than forwards Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- target/arm/internals.h | 17 +++++++ target/arm/tcg/mte_helper.c | 99 +++++++++++++++++++++++++++++++++++++ 2 files changed, 116 insertions(+) diff --git a/target/arm/internals.h b/target/arm/internals.h index 642f77df29b..1dd9182a54a 100644 --- a/target/arm/internals.h +++ b/target/arm/internals.h @@ -1288,6 +1288,23 @@ uint64_t mte_check(CPUARMState *env, uint32_t desc, uint64_t ptr, uintptr_t ra); uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, uint32_t desc); +/** + * mte_mops_probe_rev: Check where the next MTE failure is for a FEAT_MOPS + * operation going in the reverse direction + * @env: CPU env + * @ptr: *end* address of memory region (dirty pointer) + * @size: length of region (guaranteed not to cross a page boundary) + * @desc: MTEDESC descriptor word (0 means no MTE checks)
+ * Returns: the size of the region that can be copied without hitting + * an MTE tag failure + * + * Note that we assume that the caller has already checked the TBI + * and TCMA bits with mte_checks_needed() and an MTE check is definitely + * required. + */ +uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t desc); + /** * mte_check_fail: Record an MTE tag check failure * @env: CPU env diff --git a/target/arm/tcg/mte_helper.c b/target/arm/tcg/mte_helper.c index 66a80eeb950..2dd7eb3edbf 100644 --- a/target/arm/tcg/mte_helper.c +++ b/target/arm/tcg/mte_helper.c @@ -734,6 +734,55 @@ static int checkN(uint8_t *mem, int odd, int cmp, int count) return n; } +/** + * checkNrev: + * @tag: tag memory to test + * @odd: true to begin testing at tags at odd nibble + * @cmp: the tag to compare against + * @count: number of tags to test + * + * Return the number of successful tests. + * Thus a return value < @count indicates a failure. + * + * This is like checkN, but it runs backwards, checking the + * tags starting with @tag and then the tags preceding it. + * This is needed by the backwards-memory-copying operations. + */ +static int checkNrev(uint8_t *mem, int odd, int cmp, int count) +{ + int n = 0, diff; + + /* Replicate the test tag and compare. */ + cmp *= 0x11; + diff = *mem-- ^ cmp; + + if (!odd) { + goto start_even; + } + + while (1) { + /* Test odd tag. */ + if (unlikely((diff) & 0xf0)) { + break; + } + if (++n == count) { + break; + } + + start_even: + /* Test even tag. */ + if (unlikely((diff) & 0x0f)) { + break; + } + if (++n == count) { + break; + } + + diff = *mem-- ^ cmp; + } + return n; +} + /** * mte_probe_int() - helper for mte_probe and mte_check * @env: CPU environment @@ -1042,6 +1091,56 @@ uint64_t mte_mops_probe(CPUARMState *env, uint64_t ptr, uint64_t size, } } +uint64_t mte_mops_probe_rev(CPUARMState *env, uint64_t ptr, uint64_t size, + uint32_t desc) +{ + int mmu_idx, tag_count; + uint64_t ptr_tag, tag_first, tag_last; + void *mem; + bool w = FIELD_EX32(desc, MTEDESC, WRITE); + uint32_t n; + + mmu_idx = FIELD_EX32(desc, MTEDESC, MIDX); + /* True probe; this will never fault */ + mem = allocation_tag_mem_probe(env, mmu_idx, ptr, + w ? MMU_DATA_STORE : MMU_DATA_LOAD, + size, MMU_DATA_LOAD, true, 0); + if (!mem) { + return size; + } + + /* + * TODO: checkNrev() is not designed for checks of the size we expect + * for FEAT_MOPS operations, so we should implement this differently. + * Maybe we should do something like + * if (region start and size are aligned nicely) { + * do direct loads of 64 tag bits at a time; + * } else { + * call checkN() + * } + */ + /* Round the bounds to the tag granule, and compute the number of tags. */ + ptr_tag = allocation_tag_from_addr(ptr); + tag_first = QEMU_ALIGN_DOWN(ptr - (size - 1), TAG_GRANULE); + tag_last = QEMU_ALIGN_DOWN(ptr, TAG_GRANULE); + tag_count = ((tag_last - tag_first) / TAG_GRANULE) + 1; + n = checkNrev(mem, ptr & TAG_GRANULE, ptr_tag, tag_count); + if (likely(n == tag_count)) { + return size; + } + + /* + * Failure; for the first granule, it's at @ptr. Otherwise + * it's at the last byte of the nth granule. Calculate how + * many bytes we can access without hitting that failure. 
+ */ + if (n == 0) { + return 0; + } else { + return (n - 1) * TAG_GRANULE + ((ptr + 1) - tag_last); + } +} + void mte_mops_set_tags(CPUARMState *env, uint64_t ptr, uint64_t size, uint32_t desc) {
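To make the arithmetic at the end of mte_mops_probe_rev() concrete, here is a hedged standalone sketch (plain host C, not QEMU code; rev_copyable_bytes() and the example values are invented for illustration) of how the copyable length falls out of the granule count when probing backwards from the end address.

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define TAG_GRANULE 16u

/*
 * Bytes copyable working backwards from end address ptr (inclusive),
 * given that n 16-byte tag granules matched, starting with the granule
 * containing ptr and moving towards lower addresses.
 */
static uint64_t rev_copyable_bytes(uint64_t ptr, uint64_t n)
{
    uint64_t tag_last = ptr & ~(uint64_t)(TAG_GRANULE - 1);

    if (n == 0) {
        return 0;   /* mismatch already in the granule holding ptr */
    }
    /* n-1 whole granules plus the partial final granule up to ptr */
    return (n - 1) * TAG_GRANULE + ((ptr + 1) - tag_last);
}

int main(void)
{
    /*
     * ptr = 0x1006: the final granule contributes 7 bytes (0x1000..0x1006);
     * with 3 matching granules that is 2*16 + 7 = 39 copyable bytes.
     */
    printf("%" PRIu64 "\n", rev_copyable_bytes(0x1006, 3));
    return 0;
}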
From patchwork Tue Sep 12 14:04:33 2023 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 11/12] target/arm: Implement the CPY* instructions Date: Tue, 12 Sep 2023 15:04:33 +0100 Message-Id: <20230912140434.1333369-12-peter.maydell@linaro.org> In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> The FEAT_MOPS CPY* instructions implement memory copies. These come in both "always forwards" (memcpy-style) and "overlap OK" (memmove-style) flavours. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- v2: - separate helpers for the 'forwards' and 'move' variants - fix cpyfp saturation limit - cpyfm/cpyfp are always forwards, not based on Xn sign --- target/arm/tcg/helper-a64.h | 7 + target/arm/tcg/a64.decode | 14 + target/arm/tcg/helper-a64.c | 454 +++++++++++++++++++++++++++++++++ target/arm/tcg/translate-a64.c | 60 +++++ 4 files changed, 535 insertions(+) diff --git a/target/arm/tcg/helper-a64.h b/target/arm/tcg/helper-a64.h index 10a99107124..575a5dab7dc 100644 --- a/target/arm/tcg/helper-a64.h +++ b/target/arm/tcg/helper-a64.h @@ -124,3 +124,10 @@ DEF_HELPER_3(sete, void, env, i32, i32) DEF_HELPER_3(setgp, void, env, i32, i32) DEF_HELPER_3(setgm, void, env, i32, i32) DEF_HELPER_3(setge, void, env, i32, i32) + +DEF_HELPER_4(cpyp, void, env, i32, i32, i32) +DEF_HELPER_4(cpym, void, env, i32, i32, i32) +DEF_HELPER_4(cpye, void, env, i32, i32, i32) +DEF_HELPER_4(cpyfp, void, env, i32, i32, i32) +DEF_HELPER_4(cpyfm, void, env, i32, i32, i32) +DEF_HELPER_4(cpyfe, void, env, i32, i32, i32) diff --git a/target/arm/tcg/a64.decode b/target/arm/tcg/a64.decode index a202faa17bc..0cf11470741 100644 --- a/target/arm/tcg/a64.decode +++ b/target/arm/tcg/a64.decode @@ -575,3 +575,17 @@ SETE 00 011001110 ..... 10 . . 01 ..... ..... @set SETGP 00 011101110 ..... 00 . . 01 ..... ..... @set SETGM 00 011101110 ..... 01 . . 01 ..... ..... @set SETGE 00 011101110 ..... 10 . . 01 ..... ..... @set + +# Memmove/Memcopy: the CPY insns allow overlapping src/dest and +# copy in the correct direction; the CPYF insns always copy forwards. +# +# options has the nontemporal and unpriv bits for src and dest +&cpy rs rn rd options +@cpy .. ... . ..... rs:5 options:4 .. rn:5 rd:5 &cpy + +CPYFP 00 011 0 01000 ..... .... 01 ..... ..... @cpy +CPYFM 00 011 0 01010 ..... .... 01 ..... ..... @cpy +CPYFE 00 011 0 01100 ..... .... 01 ..... ..... @cpy +CPYP 00 011 1 01000 ..... .... 01 ..... ..... @cpy +CPYM 00 011 1 01010 ..... .... 01 ..... ..... @cpy +CPYE 00 011 1 01100 ..... .... 01 .....
..... @cpy diff --git a/target/arm/tcg/helper-a64.c b/target/arm/tcg/helper-a64.c index 2cf89184d77..84f54750fc2 100644 --- a/target/arm/tcg/helper-a64.c +++ b/target/arm/tcg/helper-a64.c @@ -1048,6 +1048,15 @@ static uint64_t page_limit(uint64_t addr) return TARGET_PAGE_ALIGN(addr + 1) - addr; } +/* + * Return the number of bytes we can copy starting from addr and working + * backwards without crossing a page boundary. + */ +static uint64_t page_limit_rev(uint64_t addr) +{ + return (addr & ~TARGET_PAGE_MASK) + 1; +} + /* * Perform part of a memory set on an area of guest memory starting at * toaddr (a dirty address) and extending for setsize bytes. @@ -1392,3 +1401,448 @@ void HELPER(setge)(CPUARMState *env, uint32_t syndrome, uint32_t mtedesc) { do_sete(env, syndrome, mtedesc, set_step_tags, true, GETPC()); } + +/* + * Perform part of a memory copy from the guest memory at fromaddr + * and extending for copysize bytes, to the guest memory at + * toaddr. Both addresses are dirty. + * + * Returns the number of bytes actually copied, which might be less than + * copysize; the caller should loop until the whole copy has been done. + * The caller should ensure that the guest registers are correct + * for the possibility that the first byte of the copy encounters + * an exception or watchpoint. We guarantee not to take any faults + * for bytes other than the first. + */ +static uint64_t copy_step(CPUARMState *env, uint64_t toaddr, uint64_t fromaddr, + uint64_t copysize, int wmemidx, int rmemidx, + uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra) +{ + void *rmem; + void *wmem; + + /* Don't cross a page boundary on either source or destination */ + copysize = MIN(copysize, page_limit(toaddr)); + copysize = MIN(copysize, page_limit(fromaddr)); + /* + * Handle MTE tag checks: either handle the tag mismatch for byte 0, + * or else copy up to but not including the byte with the mismatch. + */ + if (*rdesc) { + uint64_t mtesize = mte_mops_probe(env, fromaddr, copysize, *rdesc); + if (mtesize == 0) { + mte_check_fail(env, *rdesc, fromaddr, ra); + *rdesc = 0; + } else { + copysize = MIN(copysize, mtesize); + } + } + if (*wdesc) { + uint64_t mtesize = mte_mops_probe(env, toaddr, copysize, *wdesc); + if (mtesize == 0) { + mte_check_fail(env, *wdesc, toaddr, ra); + *wdesc = 0; + } else { + copysize = MIN(copysize, mtesize); + } + } + + toaddr = useronly_clean_ptr(toaddr); + fromaddr = useronly_clean_ptr(fromaddr); + /* Trapless lookup of whether we can get a host memory pointer */ + wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx); + rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx); + +#ifndef CONFIG_USER_ONLY + /* + * If we don't have host memory for both source and dest then just + * do a single byte copy. This will handle watchpoints, invalid pages, + * etc correctly. For clean code pages, the next iteration will see + * the page dirty and will use the fast path. + */ + if (unlikely(!rmem || !wmem)) { + uint8_t byte; + if (rmem) { + byte = *(uint8_t *)rmem; + } else { + byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra); + } + if (wmem) { + *(uint8_t *)wmem = byte; + } else { + cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra); + } + return 1; + } +#endif + /* Easy case: just memmove the host memory */ + memmove(wmem, rmem, copysize); + return copysize; +} + +/* + * Do part of a backwards memory copy. Here toaddr and fromaddr point + * to the *last* byte to be copied.
+ */ +static uint64_t copy_step_rev(CPUARMState *env, uint64_t toaddr, + uint64_t fromaddr, + uint64_t copysize, int wmemidx, int rmemidx, + uint32_t *wdesc, uint32_t *rdesc, uintptr_t ra) +{ + void *rmem; + void *wmem; + + /* Don't cross a page boundary on either source or destination */ + copysize = MIN(copysize, page_limit_rev(toaddr)); + copysize = MIN(copysize, page_limit_rev(fromaddr)); + + /* + * Handle MTE tag checks: either handle the tag mismatch for byte 0, + * or else copy up to but not including the byte with the mismatch. + */ + if (*rdesc) { + uint64_t mtesize = mte_mops_probe_rev(env, fromaddr, copysize, *rdesc); + if (mtesize == 0) { + mte_check_fail(env, *rdesc, fromaddr, ra); + *rdesc = 0; + } else { + copysize = MIN(copysize, mtesize); + } + } + if (*wdesc) { + uint64_t mtesize = mte_mops_probe_rev(env, toaddr, copysize, *wdesc); + if (mtesize == 0) { + mte_check_fail(env, *wdesc, toaddr, ra); + *wdesc = 0; + } else { + copysize = MIN(copysize, mtesize); + } + } + + toaddr = useronly_clean_ptr(toaddr); + fromaddr = useronly_clean_ptr(fromaddr); + /* Trapless lookup of whether we can get a host memory pointer */ + wmem = tlb_vaddr_to_host(env, toaddr, MMU_DATA_STORE, wmemidx); + rmem = tlb_vaddr_to_host(env, fromaddr, MMU_DATA_LOAD, rmemidx); + +#ifndef CONFIG_USER_ONLY + /* + * If we don't have host memory for both source and dest then just + * do a single byte copy. This will handle watchpoints, invalid pages, + * etc correctly. For clean code pages, the next iteration will see + * the page dirty and will use the fast path. + */ + if (unlikely(!rmem || !wmem)) { + uint8_t byte; + if (rmem) { + byte = *(uint8_t *)rmem; + } else { + byte = cpu_ldub_mmuidx_ra(env, fromaddr, rmemidx, ra); + } + if (wmem) { + *(uint8_t *)wmem = byte; + } else { + cpu_stb_mmuidx_ra(env, toaddr, byte, wmemidx, ra); + } + return 1; + } +#endif + /* + * Easy case: just memmove the host memory. Note that wmem and + * rmem here point to the *last* byte to copy. + */ + memmove(wmem - (copysize - 1), rmem - (copysize - 1), copysize); + return copysize; +} + +/* + * for the Memory Copy operation, our implementation chooses always + * to use "option A", where we update Xd and Xs to the final addresses + * in the CPYP insn, and then in CPYM and CPYE only need to update Xn. + * + * @env: CPU + * @syndrome: syndrome value for mismatch exceptions + * (also contains the register numbers we need to use) + * @wdesc: MTE descriptor for the writes (destination) + * @rdesc: MTE descriptor for the reads (source) + * @move: true if this is CPY (memmove), false for CPYF (memcpy forwards) + */ +static void do_cpyp(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc, uint32_t move, uintptr_t ra) +{ + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX); + uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX); + bool forwards = true; + uint64_t toaddr = env->xregs[rd]; + uint64_t fromaddr = env->xregs[rs]; + uint64_t copysize = env->xregs[rn]; + uint64_t stagecopysize, step; + + check_mops_enabled(env, ra); + + + if (move) { + /* + * Copy backwards if necessary. The direction for a non-overlapping + * copy is IMPDEF; we choose forwards. 
+ */ + if (copysize > 0x007FFFFFFFFFFFFFULL) { + copysize = 0x007FFFFFFFFFFFFFULL; + } + uint64_t fs = extract64(fromaddr, 0, 56); + uint64_t ts = extract64(toaddr, 0, 56); + uint64_t fe = extract64(fromaddr + copysize, 0, 56); + + if (fs < ts && fe > ts) { + forwards = false; + } + } else { + if (copysize > INT64_MAX) { + copysize = INT64_MAX; + } + } + + if (!mte_checks_needed(fromaddr, rdesc)) { + rdesc = 0; + } + if (!mte_checks_needed(toaddr, wdesc)) { + wdesc = 0; + } + + if (forwards) { + stagecopysize = MIN(copysize, page_limit(toaddr)); + stagecopysize = MIN(stagecopysize, page_limit(fromaddr)); + while (stagecopysize) { + env->xregs[rd] = toaddr; + env->xregs[rs] = fromaddr; + env->xregs[rn] = copysize; + step = copy_step(env, toaddr, fromaddr, stagecopysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + toaddr += step; + fromaddr += step; + copysize -= step; + stagecopysize -= step; + } + /* Insn completed, so update registers to the Option A format */ + env->xregs[rd] = toaddr + copysize; + env->xregs[rs] = fromaddr + copysize; + env->xregs[rn] = -copysize; + } else { + /* + * In a reverse copy the to and from addrs in Xs and Xd are the start + * of the range, but it's more convenient for us to work with pointers + * to the last byte being copied. + */ + toaddr += copysize - 1; + fromaddr += copysize - 1; + stagecopysize = MIN(copysize, page_limit_rev(toaddr)); + stagecopysize = MIN(stagecopysize, page_limit_rev(fromaddr)); + while (stagecopysize) { + env->xregs[rn] = copysize; + step = copy_step_rev(env, toaddr, fromaddr, stagecopysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + copysize -= step; + stagecopysize -= step; + toaddr -= step; + fromaddr -= step; + } + /* + * Insn completed, so update registers to the Option A format. + * For a reverse copy this is no different to the CPYP input format. 
+ */ + env->xregs[rn] = copysize; + } + + /* Set NZCV = 0000 to indicate we are an Option A implementation */ + env->NF = 0; + env->ZF = 1; /* our env->ZF encoding is inverted */ + env->CF = 0; + env->VF = 0; + return; +} + +void HELPER(cpyp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpyp(env, syndrome, wdesc, rdesc, true, GETPC()); +} + +void HELPER(cpyfp)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpyp(env, syndrome, wdesc, rdesc, false, GETPC()); +} + +static void do_cpym(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc, uint32_t move, uintptr_t ra) +{ + /* Main: we choose to copy until less than a page remaining */ + CPUState *cs = env_cpu(env); + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX); + uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX); + bool forwards = true; + uint64_t toaddr, fromaddr, copysize, step; + + check_mops_enabled(env, ra); + + /* We choose to NOP out "no data to copy" before consistency checks */ + if (env->xregs[rn] == 0) { + return; + } + + check_mops_wrong_option(env, syndrome, ra); + + if (move) { + forwards = (int64_t)env->xregs[rn] < 0; + } + + if (forwards) { + toaddr = env->xregs[rd] + env->xregs[rn]; + fromaddr = env->xregs[rs] + env->xregs[rn]; + copysize = -env->xregs[rn]; + } else { + copysize = env->xregs[rn]; + /* This toaddr and fromaddr point to the *last* byte to copy */ + toaddr = env->xregs[rd] + copysize - 1; + fromaddr = env->xregs[rs] + copysize - 1; + } + + if (!mte_checks_needed(fromaddr, rdesc)) { + rdesc = 0; + } + if (!mte_checks_needed(toaddr, wdesc)) { + wdesc = 0; + } + + /* Our implementation has no particular parameter requirements for CPYM */ + + /* Do the actual memmove */ + if (forwards) { + while (copysize >= TARGET_PAGE_SIZE) { + step = copy_step(env, toaddr, fromaddr, copysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + toaddr += step; + fromaddr += step; + copysize -= step; + env->xregs[rn] = -copysize; + if (copysize >= TARGET_PAGE_SIZE && + unlikely(cpu_loop_exit_requested(cs))) { + cpu_loop_exit_restore(cs, ra); + } + } + } else { + while (copysize >= TARGET_PAGE_SIZE) { + step = copy_step_rev(env, toaddr, fromaddr, copysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + toaddr -= step; + fromaddr -= step; + copysize -= step; + env->xregs[rn] = copysize; + if (copysize >= TARGET_PAGE_SIZE && + unlikely(cpu_loop_exit_requested(cs))) { + cpu_loop_exit_restore(cs, ra); + } + } + } +} + +void HELPER(cpym)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpym(env, syndrome, wdesc, rdesc, true, GETPC()); +} + +void HELPER(cpyfm)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpym(env, syndrome, wdesc, rdesc, false, GETPC()); +} + +static void do_cpye(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc, uint32_t move, uintptr_t ra) +{ + /* Epilogue: do the last partial page */ + int rd = mops_destreg(syndrome); + int rs = mops_srcreg(syndrome); + int rn = mops_sizereg(syndrome); + uint32_t rmemidx = FIELD_EX32(rdesc, MTEDESC, MIDX); + uint32_t wmemidx = FIELD_EX32(wdesc, MTEDESC, MIDX); + bool forwards = true; + uint64_t toaddr, fromaddr, copysize, step; + + check_mops_enabled(env, ra); + + /* We choose to NOP out "no data to copy" before consistency checks */ + if (env->xregs[rn] == 0) { + return; + } + + check_mops_wrong_option(env, syndrome, ra); + 
+ if (move) { + forwards = (int64_t)env->xregs[rn] < 0; + } + + if (forwards) { + toaddr = env->xregs[rd] + env->xregs[rn]; + fromaddr = env->xregs[rs] + env->xregs[rn]; + copysize = -env->xregs[rn]; + } else { + copysize = env->xregs[rn]; + /* This toaddr and fromaddr point to the *last* byte to copy */ + toaddr = env->xregs[rd] + copysize - 1; + fromaddr = env->xregs[rs] + copysize - 1; + } + + if (!mte_checks_needed(fromaddr, rdesc)) { + rdesc = 0; + } + if (!mte_checks_needed(toaddr, wdesc)) { + wdesc = 0; + } + + /* Check the size; we don't want to have to do a check-for-interrupts */ + if (copysize >= TARGET_PAGE_SIZE) { + raise_exception_ra(env, EXCP_UDEF, syndrome, + mops_mismatch_exception_target_el(env), ra); + } + + /* Do the actual memmove */ + if (forwards) { + while (copysize > 0) { + step = copy_step(env, toaddr, fromaddr, copysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + toaddr += step; + fromaddr += step; + copysize -= step; + env->xregs[rn] = -copysize; + } + } else { + while (copysize > 0) { + step = copy_step_rev(env, toaddr, fromaddr, copysize, + wmemidx, rmemidx, &wdesc, &rdesc, ra); + toaddr -= step; + fromaddr -= step; + copysize -= step; + env->xregs[rn] = copysize; + } + } +} + +void HELPER(cpye)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpye(env, syndrome, wdesc, rdesc, true, GETPC()); +} + +void HELPER(cpyfe)(CPUARMState *env, uint32_t syndrome, uint32_t wdesc, + uint32_t rdesc) +{ + do_cpye(env, syndrome, wdesc, rdesc, false, GETPC()); +} diff --git a/target/arm/tcg/translate-a64.c b/target/arm/tcg/translate-a64.c index 27bb3039b4d..97f25b4451c 100644 --- a/target/arm/tcg/translate-a64.c +++ b/target/arm/tcg/translate-a64.c @@ -4019,6 +4019,66 @@ TRANS_FEAT(SETGP, aa64_mops, do_SET, a, false, true, gen_helper_setgp) TRANS_FEAT(SETGM, aa64_mops, do_SET, a, false, true, gen_helper_setgm) TRANS_FEAT(SETGE, aa64_mops, do_SET, a, true, true, gen_helper_setge) +typedef void CpyFn(TCGv_env, TCGv_i32, TCGv_i32, TCGv_i32); + +static bool do_CPY(DisasContext *s, arg_cpy *a, bool is_epilogue, CpyFn fn) +{ + int rmemidx, wmemidx; + uint32_t syndrome, rdesc = 0, wdesc = 0; + bool wunpriv = extract32(a->options, 0, 1); + bool runpriv = extract32(a->options, 1, 1); + + /* + * UNPREDICTABLE cases: we choose to UNDEF, which allows + * us to pull this check before the CheckMOPSEnabled() test + * (which we do in the helper function) + */ + if (a->rs == a->rn || a->rs == a->rd || a->rn == a->rd || + a->rd == 31 || a->rs == 31 || a->rn == 31) { + return false; + } + + rmemidx = get_a64_user_mem_index(s, runpriv); + wmemidx = get_a64_user_mem_index(s, wunpriv); + + /* + * We pass option_a == true, matching our implementation; + * we pass wrong_option == false: helper function may set that bit.
+ */ + syndrome = syn_mop(false, false, a->options, is_epilogue, + false, true, a->rd, a->rs, a->rn); + + /* If we need to do MTE tag checking, assemble the descriptors */ + if (s->mte_active[runpriv]) { + rdesc = FIELD_DP32(rdesc, MTEDESC, TBI, s->tbid); + rdesc = FIELD_DP32(rdesc, MTEDESC, TCMA, s->tcma); + } + if (s->mte_active[wunpriv]) { + wdesc = FIELD_DP32(wdesc, MTEDESC, TBI, s->tbid); + wdesc = FIELD_DP32(wdesc, MTEDESC, TCMA, s->tcma); + wdesc = FIELD_DP32(wdesc, MTEDESC, WRITE, true); + } + /* The helper function needs these parts of the descriptor regardless */ + rdesc = FIELD_DP32(rdesc, MTEDESC, MIDX, rmemidx); + wdesc = FIELD_DP32(wdesc, MTEDESC, MIDX, wmemidx); + + /* + * The helper needs the register numbers, but since they're in + * the syndrome anyway, we let it extract them from there rather + * than passing in an extra three integer arguments. + */ + fn(cpu_env, tcg_constant_i32(syndrome), tcg_constant_i32(wdesc), + tcg_constant_i32(rdesc)); + return true; +} + +TRANS_FEAT(CPYP, aa64_mops, do_CPY, a, false, gen_helper_cpyp) +TRANS_FEAT(CPYM, aa64_mops, do_CPY, a, false, gen_helper_cpym) +TRANS_FEAT(CPYE, aa64_mops, do_CPY, a, true, gen_helper_cpye) +TRANS_FEAT(CPYFP, aa64_mops, do_CPY, a, false, gen_helper_cpyfp) +TRANS_FEAT(CPYFM, aa64_mops, do_CPY, a, false, gen_helper_cpyfm) +TRANS_FEAT(CPYFE, aa64_mops, do_CPY, a, true, gen_helper_cpyfe) + typedef void ArithTwoOp(TCGv_i64, TCGv_i64, TCGv_i64); static bool gen_rri(DisasContext *s, arg_rri_sf *a,
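For reference, here is a hedged sketch (not part of the patch) of how a guest might chain the CPY prologue/main/epilogue forms as an inline memmove, mirroring the register conventions the helpers above expect; it assumes a toolchain that accepts the FEAT_MOPS mnemonics (for example something like -march=armv8.8-a), and mops_memmove() is an invented helper name. The CPYF* forms would be driven the same way when the regions are known not to overlap.

/*
 * Hypothetical guest-side helper: dst, src and n are updated in place
 * by the instructions, as the "option A" register forms describe.
 */
static inline void mops_memmove(void *dst, const void *src, unsigned long n)
{
    asm volatile(
        "cpyp [%0]!, [%1]!, %2!\n\t"
        "cpym [%0]!, [%1]!, %2!\n\t"
        "cpye [%0]!, [%1]!, %2!"
        : "+r"(dst), "+r"(src), "+r"(n)
        :
        : "memory", "cc");
}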
From patchwork Tue Sep 12 14:04:34 2023 From: Peter Maydell To: qemu-arm@nongnu.org, qemu-devel@nongnu.org Subject: [PATCH v2 12/12] target/arm: Enable FEAT_MOPS for CPU 'max' Date: Tue, 12 Sep 2023 15:04:34 +0100 Message-Id: <20230912140434.1333369-13-peter.maydell@linaro.org> In-Reply-To: <20230912140434.1333369-1-peter.maydell@linaro.org> References: <20230912140434.1333369-1-peter.maydell@linaro.org> Enable FEAT_MOPS on the AArch64 'max' CPU, and add it to the list of features we implement. Signed-off-by: Peter Maydell Reviewed-by: Richard Henderson --- v2: Now sets the hwcap bit --- docs/system/arm/emulation.rst | 1 + linux-user/elfload.c | 1 + target/arm/tcg/cpu64.c | 1 + 3 files changed, 3 insertions(+) diff --git a/docs/system/arm/emulation.rst b/docs/system/arm/emulation.rst index 1fb6a2e8c3e..965cbf84c51 100644 --- a/docs/system/arm/emulation.rst +++ b/docs/system/arm/emulation.rst @@ -58,6 +58,7 @@ the following architecture extensions: - FEAT_LSE (Large System Extensions) - FEAT_LSE2 (Large System Extensions v2) - FEAT_LVA (Large Virtual Address space) +- FEAT_MOPS (Standardization of memory operations) - FEAT_MTE (Memory Tagging Extension) - FEAT_MTE2 (Memory Tagging Extension) - FEAT_MTE3 (MTE Asymmetric Fault Handling) diff --git a/linux-user/elfload.c b/linux-user/elfload.c index 203a2b790d5..db75cd4b33f 100644 --- a/linux-user/elfload.c +++ b/linux-user/elfload.c @@ -816,6 +816,7 @@ uint32_t get_elf_hwcap2(void) GET_FEATURE_ID(aa64_sme_i16i64, ARM_HWCAP2_A64_SME_I16I64); GET_FEATURE_ID(aa64_sme_fa64, ARM_HWCAP2_A64_SME_FA64); GET_FEATURE_ID(aa64_hbc, ARM_HWCAP2_A64_HBC); + GET_FEATURE_ID(aa64_mops, ARM_HWCAP2_A64_MOPS); return hwcaps; } diff --git a/target/arm/tcg/cpu64.c b/target/arm/tcg/cpu64.c index 57abaea00cd..68928e51272 100644 --- a/target/arm/tcg/cpu64.c +++ b/target/arm/tcg/cpu64.c @@ -1028,6 +1028,7 @@ void aarch64_max_tcg_initfn(Object *obj) cpu->isar.id_aa64isar1 = t; t = cpu->isar.id_aa64isar2; + t = FIELD_DP64(t, ID_AA64ISAR2, MOPS, 1); /* FEAT_MOPS */ t = FIELD_DP64(t, ID_AA64ISAR2, BC, 1); /* FEAT_HBC */ cpu->isar.id_aa64isar2 = t;
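Once the hwcap bit added in elfload.c above is advertised, a guest binary can probe for the feature at run time. The following is a hedged sketch (not part of the patch) using glibc's getauxval(); the fallback HWCAP2_MOPS value is an assumption and should be checked against the kernel's asm/hwcap.h.

#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MOPS
#define HWCAP2_MOPS (1UL << 43)   /* assumed bit position; verify against asm/hwcap.h */
#endif

int main(void)
{
    unsigned long hwcap2 = getauxval(AT_HWCAP2);

    printf("FEAT_MOPS %s\n", (hwcap2 & HWCAP2_MOPS) ? "present" : "absent");
    return 0;
}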