From patchwork Mon Jul 10 09:43:13 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 701322
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Andrew Jones, Sunil V L, Conor Dooley, Saravana Kannan, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v5 1/9] RISC-V: Add riscv_fw_parent_hartid() function
Date: Mon, 10 Jul 2023 15:13:13 +0530
Message-Id: <20230710094321.1378351-2-apatel@ventanamicro.com>
In-Reply-To: <20230710094321.1378351-1-apatel@ventanamicro.com>
References: <20230710094321.1378351-1-apatel@ventanamicro.com>
X-Mailing-List: devicetree@vger.kernel.org

We add a common riscv_fw_parent_hartid() which helps device drivers get
the parent hartid of an INTC (i.e. local interrupt controller) fwnode.
This works for both DT and ACPI.

Signed-off-by: Anup Patel
---
 arch/riscv/include/asm/processor.h |  3 +++
 arch/riscv/kernel/cpu.c            | 16 ++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/arch/riscv/include/asm/processor.h b/arch/riscv/include/asm/processor.h
index c950a8d9edef..39dc23a18f88 100644
--- a/arch/riscv/include/asm/processor.h
+++ b/arch/riscv/include/asm/processor.h
@@ -81,6 +81,9 @@ int riscv_of_processor_hartid(struct device_node *node, unsigned long *hartid);
 int riscv_early_of_processor_hartid(struct device_node *node, unsigned long *hartid);
 int riscv_of_parent_hartid(struct device_node *node, unsigned long *hartid);
 
+struct fwnode_handle;
+int riscv_fw_parent_hartid(struct fwnode_handle *node, unsigned long *hartid);
+
 extern void riscv_fill_hwcap(void);
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);

diff --git a/arch/riscv/kernel/cpu.c b/arch/riscv/kernel/cpu.c
index a2fc952318e9..9be9b3b1f333 100644
--- a/arch/riscv/kernel/cpu.c
+++ b/arch/riscv/kernel/cpu.c
@@ -96,6 +96,22 @@ int riscv_of_parent_hartid(struct device_node *node, unsigned long *hartid)
 	return -1;
 }
 
+/* Find hart ID of the CPU fwnode under which given fwnode falls. */
+int riscv_fw_parent_hartid(struct fwnode_handle *node, unsigned long *hartid)
+{
+	int rc;
+	u64 temp;
+
+	if (!is_of_node(node)) {
+		rc = fwnode_property_read_u64_array(node, "hartid", &temp, 1);
+		if (!rc)
+			*hartid = temp;
+	} else
+		rc = riscv_of_parent_hartid(to_of_node(node), hartid);
+
+	return rc;
+}
+
 DEFINE_PER_CPU(struct riscv_cpuinfo, riscv_cpuinfo);
 
 unsigned long riscv_cached_mvendorid(unsigned int cpu_id)
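As a usage illustration (not part of the patch): a driver that has resolved an
"interrupts-extended" entry down to the fwnode of a local interrupt controller
could combine the new helper with the existing riscv_hartid_to_cpuid() to find
the owning CPU. A minimal sketch; the function name below is hypothetical:

    #include <asm/processor.h>
    #include <asm/smp.h>

    /*
     * Hypothetical usage sketch, not part of the patch: map an INTC fwnode
     * to a logical CPU number. Works the same whether the fwnode came from
     * DT or ACPI, which is the point of riscv_fw_parent_hartid().
     */
    static int example_intc_fwnode_to_cpu(struct fwnode_handle *intc_fwnode)
    {
    	unsigned long hartid;
    	int rc;

    	rc = riscv_fw_parent_hartid(intc_fwnode, &hartid);
    	if (rc)
    		return rc;

    	/* riscv_hartid_to_cpuid() returns a negative value for unknown harts */
    	return riscv_hartid_to_cpuid(hartid);
    }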
From patchwork Mon Jul 10 09:43:14 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 701321
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Andrew Jones, Sunil V L, Conor Dooley, Saravana Kannan, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v5 2/9] irqchip/riscv-intc: Add support for RISC-V AIA
Date: Mon, 10 Jul 2023 15:13:14 +0530
Message-Id: <20230710094321.1378351-3-apatel@ventanamicro.com>
In-Reply-To: <20230710094321.1378351-1-apatel@ventanamicro.com>
References: <20230710094321.1378351-1-apatel@ventanamicro.com>
X-Mailing-List: devicetree@vger.kernel.org

The RISC-V advanced interrupt architecture (AIA) extends the per-HART
local interrupts in the following ways:
1. Minimum 64 local interrupts for both RV32 and RV64
2. Ability to process multiple pending local interrupts in the same
   interrupt handler
3. Priority configuration for each local interrupt
4. Special CSRs to configure/access the per-HART MSI controller

This patch adds support for RISC-V AIA in the RISC-V intc driver.
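As background for the handler added below (illustration only): the AIA xtopi
CSR reports the highest-priority pending-and-enabled local interrupt, with the
interrupt identity in an upper bit-field and its priority in the low bits, so
one trap can service several local interrupts by re-reading the CSR until it
returns zero. A minimal sketch of that claim loop, using the CSR_TOPI and
TOPI_IID_SHIFT definitions referenced by the diff:

    /*
     * Sketch only (not part of the patch): drain every pending-and-enabled
     * AIA local interrupt in a single trap. csr_read(CSR_TOPI) returns zero
     * once nothing is left to service; the identity field selects the hwirq.
     */
    static void example_aia_claim_loop(struct irq_domain *domain)
    {
    	unsigned long topi;

    	while ((topi = csr_read(CSR_TOPI)))
    		generic_handle_domain_irq(domain, topi >> TOPI_IID_SHIFT);
    }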
Signed-off-by: Anup Patel --- drivers/irqchip/irq-riscv-intc.c | 36 ++++++++++++++++++++++++++------ 1 file changed, 30 insertions(+), 6 deletions(-) diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c index 4adeee1bc391..e235bf1708a4 100644 --- a/drivers/irqchip/irq-riscv-intc.c +++ b/drivers/irqchip/irq-riscv-intc.c @@ -17,6 +17,7 @@ #include #include #include +#include static struct irq_domain *intc_domain; @@ -30,6 +31,15 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs) generic_handle_domain_irq(intc_domain, cause); } +static asmlinkage void riscv_intc_aia_irq(struct pt_regs *regs) +{ + unsigned long topi; + + while ((topi = csr_read(CSR_TOPI))) + generic_handle_domain_irq(intc_domain, + topi >> TOPI_IID_SHIFT); +} + /* * On RISC-V systems local interrupts are masked or unmasked by writing * the SIE (Supervisor Interrupt Enable) CSR. As CSRs can only be written @@ -39,12 +49,18 @@ static asmlinkage void riscv_intc_irq(struct pt_regs *regs) static void riscv_intc_irq_mask(struct irq_data *d) { - csr_clear(CSR_IE, BIT(d->hwirq)); + if (d->hwirq < BITS_PER_LONG) + csr_clear(CSR_IE, BIT(d->hwirq)); + else + csr_clear(CSR_IEH, BIT(d->hwirq - BITS_PER_LONG)); } static void riscv_intc_irq_unmask(struct irq_data *d) { - csr_set(CSR_IE, BIT(d->hwirq)); + if (d->hwirq < BITS_PER_LONG) + csr_set(CSR_IE, BIT(d->hwirq)); + else + csr_set(CSR_IEH, BIT(d->hwirq - BITS_PER_LONG)); } static void riscv_intc_irq_eoi(struct irq_data *d) @@ -115,16 +131,22 @@ static struct fwnode_handle *riscv_intc_hwnode(void) static int __init riscv_intc_init_common(struct fwnode_handle *fn) { - int rc; + int rc, nr_irqs = BITS_PER_LONG; + + if (riscv_isa_extension_available(NULL, SxAIA) && BITS_PER_LONG == 32) + nr_irqs = nr_irqs * 2; - intc_domain = irq_domain_create_linear(fn, BITS_PER_LONG, + intc_domain = irq_domain_create_linear(fn, nr_irqs, &riscv_intc_domain_ops, NULL); if (!intc_domain) { pr_err("unable to add IRQ domain\n"); return -ENXIO; } - rc = set_handle_irq(&riscv_intc_irq); + if (riscv_isa_extension_available(NULL, SxAIA)) + rc = set_handle_irq(&riscv_intc_aia_irq); + else + rc = set_handle_irq(&riscv_intc_irq); if (rc) { pr_err("failed to set irq handler\n"); return rc; @@ -132,7 +154,9 @@ static int __init riscv_intc_init_common(struct fwnode_handle *fn) riscv_set_intc_hwnode_fn(riscv_intc_hwnode); - pr_info("%d local interrupts mapped\n", BITS_PER_LONG); + pr_info("%d local interrupts mapped%s\n", + nr_irqs, (riscv_isa_extension_available(NULL, SxAIA)) ? 
+		" using AIA" : "");
 
 	return 0;
 }
From patchwork Mon Jul 10 09:43:16 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 701320
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Andrew Jones, Sunil V L, Conor Dooley, Saravana Kannan, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v5 4/9] irqchip: Add RISC-V incoming MSI controller
driver Date: Mon, 10 Jul 2023 15:13:16 +0530 Message-Id: <20230710094321.1378351-5-apatel@ventanamicro.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230710094321.1378351-1-apatel@ventanamicro.com> References: <20230710094321.1378351-1-apatel@ventanamicro.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: devicetree@vger.kernel.org The RISC-V advanced interrupt architecture (AIA) specification defines a new MSI controller for managing MSIs and IPIs on a RISC-V platform. This new MSI controller is referred to as incoming message signalled interrupt controller (IMSIC) which manages MSI on per-HART (or per-CPU) basis. (For more details refer https://github.com/riscv/riscv-aia) This patch adds an irqchip driver for RISC-V IMSIC which provides IPIs and platform MSIs to the Linux RISC-V kernel. Signed-off-by: Anup Patel --- drivers/irqchip/Kconfig | 7 +- drivers/irqchip/Makefile | 1 + drivers/irqchip/irq-riscv-imsic.c | 1011 +++++++++++++++++++++++++++ include/linux/irqchip/riscv-imsic.h | 86 +++ 4 files changed, 1104 insertions(+), 1 deletion(-) create mode 100644 drivers/irqchip/irq-riscv-imsic.c create mode 100644 include/linux/irqchip/riscv-imsic.h diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig index 09e422da482f..8ef18be5f37b 100644 --- a/drivers/irqchip/Kconfig +++ b/drivers/irqchip/Kconfig @@ -30,7 +30,6 @@ config ARM_GIC_V2M config GIC_NON_BANKED bool - config ARM_GIC_V3 bool select IRQ_DOMAIN_HIERARCHY @@ -545,6 +544,12 @@ config SIFIVE_PLIC select IRQ_DOMAIN_HIERARCHY select GENERIC_IRQ_EFFECTIVE_AFF_MASK if SMP +config RISCV_IMSIC + bool + depends on RISCV + select IRQ_DOMAIN_HIERARCHY + select GENERIC_MSI_IRQ + config EXYNOS_IRQ_COMBINER bool "Samsung Exynos IRQ combiner support" if COMPILE_TEST depends on (ARCH_EXYNOS && ARM) || COMPILE_TEST diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile index ffd945fe71aa..577bde3e986b 100644 --- a/drivers/irqchip/Makefile +++ b/drivers/irqchip/Makefile @@ -95,6 +95,7 @@ obj-$(CONFIG_QCOM_MPM) += irq-qcom-mpm.o obj-$(CONFIG_CSKY_MPINTC) += irq-csky-mpintc.o obj-$(CONFIG_CSKY_APB_INTC) += irq-csky-apb-intc.o obj-$(CONFIG_RISCV_INTC) += irq-riscv-intc.o +obj-$(CONFIG_RISCV_IMSIC) += irq-riscv-imsic.o obj-$(CONFIG_SIFIVE_PLIC) += irq-sifive-plic.o obj-$(CONFIG_IMX_IRQSTEER) += irq-imx-irqsteer.o obj-$(CONFIG_IMX_INTMUX) += irq-imx-intmux.o diff --git a/drivers/irqchip/irq-riscv-imsic.c b/drivers/irqchip/irq-riscv-imsic.c new file mode 100644 index 000000000000..ceb5e0fc883c --- /dev/null +++ b/drivers/irqchip/irq-riscv-imsic.c @@ -0,0 +1,1011 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + */ + +#define pr_fmt(fmt) "riscv-imsic: " fmt +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#define IMSIC_DISABLE_EIDELIVERY 0 +#define IMSIC_ENABLE_EIDELIVERY 1 +#define IMSIC_DISABLE_EITHRESHOLD 1 +#define IMSIC_ENABLE_EITHRESHOLD 0 + +/* + * The IMSIC driver uses 1 IPI for ID synchronization and + * arch/riscv/kernel/smp.c require 6 IPIs so we fix the + * total number of IPIs to 8. 
+ */ +#define IMSIC_NR_IPI 8 + +#define imsic_csr_write(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_write(CSR_IREG, __v); \ +} while (0) + +#define imsic_csr_read(__c) \ +({ \ + unsigned long __v; \ + csr_write(CSR_ISELECT, __c); \ + __v = csr_read(CSR_IREG); \ + __v; \ +}) + +#define imsic_csr_set(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_set(CSR_IREG, __v); \ +} while (0) + +#define imsic_csr_clear(__c, __v) \ +do { \ + csr_write(CSR_ISELECT, __c); \ + csr_clear(CSR_IREG, __v); \ +} while (0) + +struct imsic_priv { + /* Global configuration common for all HARTs */ + struct imsic_global_config global; + + /* Global state of interrupt identities */ + raw_spinlock_t ids_lock; + unsigned long *ids_used_bimap; + unsigned long *ids_enabled_bimap; + unsigned int *ids_target_cpu; + + /* Mask for connected CPUs */ + struct cpumask lmask; + + /* IPI interrupt identity and synchronization */ + u32 ipi_id; + int ipi_virq; + struct irq_desc *ipi_lsync_desc; + + /* IRQ domains */ + struct irq_domain *base_domain; + struct irq_domain *plat_domain; +}; + +static struct imsic_priv *imsic; +static int imsic_parent_irq; + +const struct imsic_global_config *imsic_get_global_config(void) +{ + return (imsic) ? &imsic->global : NULL; +} +EXPORT_SYMBOL_GPL(imsic_get_global_config); + +static int imsic_cpu_page_phys(unsigned int cpu, + unsigned int guest_index, + phys_addr_t *out_msi_pa) +{ + struct imsic_global_config *global; + struct imsic_local_config *local; + + global = &imsic->global; + local = per_cpu_ptr(global->local, cpu); + + if (BIT(global->guest_index_bits) <= guest_index) + return -EINVAL; + + if (out_msi_pa) + *out_msi_pa = local->msi_pa + + (guest_index * IMSIC_MMIO_PAGE_SZ); + + return 0; +} + +static int imsic_get_cpu(const struct cpumask *mask_val, bool force, + unsigned int *out_target_cpu) +{ + struct cpumask amask; + unsigned int cpu; + + cpumask_and(&amask, &imsic->lmask, mask_val); + + if (force) + cpu = cpumask_first(&amask); + else + cpu = cpumask_any_and(&amask, cpu_online_mask); + + if (cpu >= nr_cpu_ids) + return -EINVAL; + + if (out_target_cpu) + *out_target_cpu = cpu; + + return 0; +} + +static void imsic_id_set_target(unsigned int id, unsigned int target_cpu) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + imsic->ids_target_cpu[id] = target_cpu; + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); +} + +static unsigned int imsic_id_get_target(unsigned int id) +{ + unsigned int ret; + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + ret = imsic->ids_target_cpu[id]; + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); + + return ret; +} + +static void __imsic_eix_update(unsigned long base_id, + unsigned long num_id, bool pend, bool val) +{ + unsigned long i, isel, ireg; + unsigned long id = base_id, last_id = base_id + num_id; + + while (id < last_id) { + isel = id / BITS_PER_LONG; + isel *= BITS_PER_LONG / IMSIC_EIPx_BITS; + isel += (pend) ? IMSIC_EIP0 : IMSIC_EIE0; + + ireg = 0; + for (i = id & (__riscv_xlen - 1); + (id < last_id) && (i < __riscv_xlen); i++) { + ireg |= BIT(i); + id++; + } + + /* + * The IMSIC EIEx and EIPx registers are indirectly + * accessed via using ISELECT and IREG CSRs so we + * need to access these CSRs without getting preempted. + * + * All existing users of this function call this + * function with local IRQs disabled so we don't + * need to do anything special here. 
+ */ + if (val) + imsic_csr_set(isel, ireg); + else + imsic_csr_clear(isel, ireg); + } +} + +#define __imsic_id_enable(__id) \ + __imsic_eix_update((__id), 1, false, true) +#define __imsic_id_disable(__id) \ + __imsic_eix_update((__id), 1, false, false) + +static void imsic_ids_local_sync(void) +{ + int i; + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + for (i = 1; i <= imsic->global.nr_ids; i++) { + if (imsic->ipi_id == i) + continue; + + if (test_bit(i, imsic->ids_enabled_bimap)) + __imsic_id_enable(i); + else + __imsic_id_disable(i); + } + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); +} + +static void imsic_ids_local_delivery(bool enable) +{ + if (enable) { + imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_ENABLE_EITHRESHOLD); + imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_ENABLE_EIDELIVERY); + } else { + imsic_csr_write(IMSIC_EIDELIVERY, IMSIC_DISABLE_EIDELIVERY); + imsic_csr_write(IMSIC_EITHRESHOLD, IMSIC_DISABLE_EITHRESHOLD); + } +} + +#ifdef CONFIG_SMP +static irqreturn_t imsic_ids_sync_handler(int irq, void *data) +{ + imsic_ids_local_sync(); + return IRQ_HANDLED; +} + +static void imsic_ids_remote_sync(void) +{ + struct cpumask amask; + + /* + * We simply inject ID synchronization IPI to all target CPUs + * except current CPU. The ipi_send_mask() implementation of + * IPI mux will inject ID synchronization IPI only for CPUs + * that have enabled it so offline CPUs won't receive IPI. + * An offline CPU will unconditionally synchronize IDs through + * imsic_starting_cpu() when the CPU is brought up. + */ + cpumask_andnot(&amask, &imsic->lmask, cpumask_of(smp_processor_id())); + __ipi_send_mask(imsic->ipi_lsync_desc, &amask); +} +#else +#define imsic_ids_remote_sync() +#endif + +static int imsic_ids_alloc(unsigned int order) +{ + int ret; + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + ret = bitmap_find_free_region(imsic->ids_used_bimap, + imsic->global.nr_ids + 1, order); + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); + + return ret; +} + +static void imsic_ids_free(unsigned int base_id, unsigned int order) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + bitmap_release_region(imsic->ids_used_bimap, base_id, order); + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); +} + +static int __init imsic_ids_init(void) +{ + int i; + struct imsic_global_config *global = &imsic->global; + + raw_spin_lock_init(&imsic->ids_lock); + + /* Allocate used bitmap */ + imsic->ids_used_bimap = bitmap_zalloc(global->nr_ids + 1, GFP_KERNEL); + if (!imsic->ids_used_bimap) + return -ENOMEM; + + /* Allocate enabled bitmap */ + imsic->ids_enabled_bimap = bitmap_zalloc(global->nr_ids + 1, + GFP_KERNEL); + if (!imsic->ids_enabled_bimap) { + kfree(imsic->ids_used_bimap); + return -ENOMEM; + } + + /* Allocate target CPU array */ + imsic->ids_target_cpu = kcalloc(global->nr_ids + 1, + sizeof(unsigned int), GFP_KERNEL); + if (!imsic->ids_target_cpu) { + bitmap_free(imsic->ids_enabled_bimap); + bitmap_free(imsic->ids_used_bimap); + return -ENOMEM; + } + for (i = 0; i <= global->nr_ids; i++) + imsic->ids_target_cpu[i] = UINT_MAX; + + /* Reserve ID#0 because it is special and never implemented */ + bitmap_set(imsic->ids_used_bimap, 0, 1); + + return 0; +} + +static void __init imsic_ids_cleanup(void) +{ + kfree(imsic->ids_target_cpu); + bitmap_free(imsic->ids_enabled_bimap); + bitmap_free(imsic->ids_used_bimap); +} + +#ifdef CONFIG_SMP +static void imsic_ipi_send(unsigned int cpu) +{ + struct imsic_local_config *local = + 
per_cpu_ptr(imsic->global.local, cpu); + + writel(imsic->ipi_id, local->msi_va); +} + +static void imsic_ipi_starting_cpu(void) +{ + /* Enable IPIs for current CPU. */ + __imsic_id_enable(imsic->ipi_id); + + /* Enable virtual IPI used for IMSIC ID synchronization */ + enable_percpu_irq(imsic->ipi_virq, 0); +} + +static void imsic_ipi_dying_cpu(void) +{ + /* + * Disable virtual IPI used for IMSIC ID synchronization so + * that we don't receive ID synchronization requests. + */ + disable_percpu_irq(imsic->ipi_virq); +} + +static int __init imsic_ipi_domain_init(void) +{ + int virq; + + /* Allocate interrupt identity for IPIs */ + virq = imsic_ids_alloc(get_count_order(1)); + if (virq < 0) + return virq; + imsic->ipi_id = virq; + + /* Create IMSIC IPI multiplexing */ + virq = ipi_mux_create(IMSIC_NR_IPI, imsic_ipi_send); + if (virq <= 0) { + imsic_ids_free(imsic->ipi_id, get_count_order(1)); + return (virq < 0) ? virq : -ENOMEM; + } + imsic->ipi_virq = virq; + + /* First vIRQ is used for IMSIC ID synchronization */ + virq = request_percpu_irq(imsic->ipi_virq, imsic_ids_sync_handler, + "riscv-imsic-lsync", imsic->global.local); + if (virq) { + imsic_ids_free(imsic->ipi_id, get_count_order(1)); + return virq; + } + irq_set_status_flags(imsic->ipi_virq, IRQ_HIDDEN); + imsic->ipi_lsync_desc = irq_to_desc(imsic->ipi_virq); + + /* Set vIRQ range */ + riscv_ipi_set_virq_range(imsic->ipi_virq + 1, IMSIC_NR_IPI - 1, true); + + return 0; +} + +static void __init imsic_ipi_domain_cleanup(void) +{ + if (imsic->ipi_lsync_desc) + free_percpu_irq(imsic->ipi_virq, imsic->global.local); + imsic_ids_free(imsic->ipi_id, get_count_order(1)); +} +#else +static void imsic_ipi_starting_cpu(void) +{ +} + +static void imsic_ipi_dying_cpu(void) +{ +} + +static int __init imsic_ipi_domain_init(void) +{ + /* Clear the IPI id because we are not using IPIs */ + imsic->ipi_id = 0; + return 0; +} + +static void __init imsic_ipi_domain_cleanup(void) +{ +} +#endif + +static void imsic_irq_mask(struct irq_data *d) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + bitmap_clear(imsic->ids_enabled_bimap, d->hwirq, 1); + __imsic_id_disable(d->hwirq); + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); + + imsic_ids_remote_sync(); +} + +static void imsic_irq_unmask(struct irq_data *d) +{ + unsigned long flags; + + raw_spin_lock_irqsave(&imsic->ids_lock, flags); + bitmap_set(imsic->ids_enabled_bimap, d->hwirq, 1); + __imsic_id_enable(d->hwirq); + raw_spin_unlock_irqrestore(&imsic->ids_lock, flags); + + imsic_ids_remote_sync(); +} + +static void imsic_irq_compose_msi_msg(struct irq_data *d, + struct msi_msg *msg) +{ + phys_addr_t msi_addr; + unsigned int cpu; + int err; + + cpu = imsic_id_get_target(d->hwirq); + if (WARN_ON(cpu == UINT_MAX)) + return; + + err = imsic_cpu_page_phys(cpu, 0, &msi_addr); + if (WARN_ON(err)) + return; + + msg->address_hi = upper_32_bits(msi_addr); + msg->address_lo = lower_32_bits(msi_addr); + msg->data = d->hwirq; +} + +#ifdef CONFIG_SMP +static int imsic_irq_set_affinity(struct irq_data *d, + const struct cpumask *mask_val, + bool force) +{ + unsigned int target_cpu; + int rc; + + rc = imsic_get_cpu(mask_val, force, &target_cpu); + if (rc) + return rc; + + imsic_id_set_target(d->hwirq, target_cpu); + irq_data_update_effective_affinity(d, cpumask_of(target_cpu)); + + return IRQ_SET_MASK_OK; +} +#endif + +static struct irq_chip imsic_irq_base_chip = { + .name = "RISC-V IMSIC-BASE", + .irq_mask = imsic_irq_mask, + .irq_unmask = imsic_irq_unmask, +#ifdef CONFIG_SMP + 
.irq_set_affinity = imsic_irq_set_affinity, +#endif + .irq_compose_msi_msg = imsic_irq_compose_msi_msg, + .flags = IRQCHIP_SKIP_SET_WAKE | + IRQCHIP_MASK_ON_SUSPEND, +}; + +static int imsic_irq_domain_alloc(struct irq_domain *domain, + unsigned int virq, + unsigned int nr_irqs, + void *args) +{ + int i, hwirq, err = 0; + unsigned int cpu; + + err = imsic_get_cpu(&imsic->lmask, false, &cpu); + if (err) + return err; + + hwirq = imsic_ids_alloc(get_count_order(nr_irqs)); + if (hwirq < 0) + return hwirq; + + for (i = 0; i < nr_irqs; i++) { + imsic_id_set_target(hwirq + i, cpu); + irq_domain_set_info(domain, virq + i, hwirq + i, + &imsic_irq_base_chip, imsic, + handle_simple_irq, NULL, NULL); + irq_set_noprobe(virq + i); + irq_set_affinity(virq + i, &imsic->lmask); + /* + * IMSIC does not implement irq_disable() so Linux interrupt + * subsystem will take a lazy approach for disabling an IMSIC + * interrupt. This means IMSIC interrupts are left unmasked + * upon system suspend and interrupts are not processed + * immediately upon system wake up. To tackle this, we disable + * the lazy approach for all IMSIC interrupts. + */ + irq_set_status_flags(virq + i, IRQ_DISABLE_UNLAZY); + } + + return 0; +} + +static void imsic_irq_domain_free(struct irq_domain *domain, + unsigned int virq, + unsigned int nr_irqs) +{ + struct irq_data *d = irq_domain_get_irq_data(domain, virq); + + imsic_ids_free(d->hwirq, get_count_order(nr_irqs)); + irq_domain_free_irqs_parent(domain, virq, nr_irqs); +} + +static const struct irq_domain_ops imsic_base_domain_ops = { + .alloc = imsic_irq_domain_alloc, + .free = imsic_irq_domain_free, +}; + +static struct irq_chip imsic_plat_irq_chip = { + .name = "RISC-V IMSIC-PLAT", +}; + +static struct msi_domain_ops imsic_plat_domain_ops = { +}; + +static struct msi_domain_info imsic_plat_domain_info = { + .flags = (MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS), + .ops = &imsic_plat_domain_ops, + .chip = &imsic_plat_irq_chip, +}; + +static int __init imsic_irq_domains_init(struct fwnode_handle *fwnode) +{ + /* Create Base IRQ domain */ + imsic->base_domain = irq_domain_create_tree(fwnode, + &imsic_base_domain_ops, imsic); + if (!imsic->base_domain) { + pr_err("Failed to create IMSIC base domain\n"); + return -ENOMEM; + } + irq_domain_update_bus_token(imsic->base_domain, DOMAIN_BUS_NEXUS); + + /* Create Platform MSI domain */ + imsic->plat_domain = platform_msi_create_irq_domain(fwnode, + &imsic_plat_domain_info, + imsic->base_domain); + if (!imsic->plat_domain) { + pr_err("Failed to create IMSIC platform domain\n"); + irq_domain_remove(imsic->base_domain); + return -ENOMEM; + } + + return 0; +} + +/* + * To handle an interrupt, we read the TOPEI CSR and write zero in one + * instruction. If TOPEI CSR is non-zero then we translate TOPEI.ID to + * Linux interrupt number and let Linux IRQ subsystem handle it. 
+ */ +static void imsic_handle_irq(struct irq_desc *desc) +{ + struct irq_chip *chip = irq_desc_get_chip(desc); + irq_hw_number_t hwirq; + int err; + + chained_irq_enter(chip, desc); + + while ((hwirq = csr_swap(CSR_TOPEI, 0))) { + hwirq = hwirq >> TOPEI_ID_SHIFT; + + if (hwirq == imsic->ipi_id) { +#ifdef CONFIG_SMP + ipi_mux_process(); +#endif + continue; + } + + err = generic_handle_domain_irq(imsic->base_domain, hwirq); + if (unlikely(err)) + pr_warn_ratelimited( + "hwirq %lu mapping not found\n", hwirq); + } + + chained_irq_exit(chip, desc); +} + +static int imsic_starting_cpu(unsigned int cpu) +{ + /* Enable per-CPU parent interrupt */ + enable_percpu_irq(imsic_parent_irq, + irq_get_trigger_type(imsic_parent_irq)); + + /* Setup IPIs */ + imsic_ipi_starting_cpu(); + + /* + * Interrupts identities might have been enabled/disabled while + * this CPU was not running so sync-up local enable/disable state. + */ + imsic_ids_local_sync(); + + /* Enable local interrupt delivery */ + imsic_ids_local_delivery(true); + + return 0; +} + +static int imsic_dying_cpu(unsigned int cpu) +{ + /* Cleanup IPIs */ + imsic_ipi_dying_cpu(); + + return 0; +} + +static int __init imsic_get_parent_hartid(struct fwnode_handle *fwnode, + u32 index, unsigned long *hartid) +{ + int rc; + struct fwnode_reference_args parent; + + rc = fwnode_property_get_reference_args(fwnode, + "interrupts-extended", "#interrupt-cells", + 0, index, &parent); + if (rc) + return rc; + + /* + * Skip interrupts other than external interrupts for + * current privilege level. + */ + if (parent.args[0] != RV_IRQ_EXT) + return -EINVAL; + + return riscv_fw_parent_hartid(parent.fwnode, hartid); +} + +static int __init imsic_get_mmio_resource(struct fwnode_handle *fwnode, + u32 index, struct resource *res) +{ + /* + * Currently, only OF fwnode is support so extend this function + * for other types of fwnode for ACPI support. + */ + if (!is_of_node(fwnode)) + return -EINVAL; + return of_address_to_resource(to_of_node(fwnode), index, res); +} + +static int __init imsic_init(struct fwnode_handle *fwnode) +{ + int rc, cpu; + phys_addr_t base_addr; + struct irq_domain *domain; + void __iomem **mmios_va = NULL; + struct resource res, *mmios = NULL; + struct imsic_local_config *local; + struct imsic_global_config *global; + unsigned long reloff, hartid; + u32 i, j, index, nr_parent_irqs, nr_handlers = 0, num_mmios = 0; + + /* + * Only one IMSIC instance allowed in a platform for clean + * implementation of SMP IRQ affinity and per-CPU IPIs. + * + * This means on a multi-socket (or multi-die) platform we + * will have multiple MMIO regions for one IMSIC instance. 
+ */ + if (imsic) { + pr_err("%pfwP: already initialized hence ignoring\n", + fwnode); + return -ENODEV; + } + + if (!riscv_isa_extension_available(NULL, SxAIA)) { + pr_err("%pfwP: AIA support not available\n", fwnode); + return -ENODEV; + } + + imsic = kzalloc(sizeof(*imsic), GFP_KERNEL); + if (!imsic) + return -ENOMEM; + global = &imsic->global; + + global->local = alloc_percpu(typeof(*(global->local))); + if (!global->local) { + rc = -ENOMEM; + goto out_free_priv; + } + + /* Find number of parent interrupts */ + nr_parent_irqs = 0; + while (!imsic_get_parent_hartid(fwnode, nr_parent_irqs, &hartid)) + nr_parent_irqs++; + if (!nr_parent_irqs) { + pr_err("%pfwP: no parent irqs available\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Find number of guest index bits in MSI address */ + rc = fwnode_property_read_u32_array(fwnode, "riscv,guest-index-bits", + &global->guest_index_bits, 1); + if (rc) + global->guest_index_bits = 0; + i = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT; + if (i < global->guest_index_bits) { + pr_err("%pfwP: guest index bits too big\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Find number of HART index bits */ + rc = fwnode_property_read_u32_array(fwnode, "riscv,hart-index-bits", + &global->hart_index_bits, 1); + if (rc) { + /* Assume default value */ + global->hart_index_bits = __fls(nr_parent_irqs); + if (BIT(global->hart_index_bits) < nr_parent_irqs) + global->hart_index_bits++; + } + i = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT - global->guest_index_bits; + if (i < global->hart_index_bits) { + pr_err("%pfwP: HART index bits too big\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Find number of group index bits */ + rc = fwnode_property_read_u32_array(fwnode, "riscv,group-index-bits", + &global->group_index_bits, 1); + if (rc) + global->group_index_bits = 0; + i = BITS_PER_LONG - IMSIC_MMIO_PAGE_SHIFT - + global->guest_index_bits - global->hart_index_bits; + if (i < global->group_index_bits) { + pr_err("%pfwP: group index bits too big\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* + * Find first bit position of group index. + * If not specified assumed the default APLIC-IMSIC configuration. 
+ */ + rc = fwnode_property_read_u32_array(fwnode, "riscv,group-index-shift", + &global->group_index_shift, 1); + if (rc) + global->group_index_shift = IMSIC_MMIO_PAGE_SHIFT * 2; + i = global->group_index_bits + global->group_index_shift - 1; + if (i >= BITS_PER_LONG) { + pr_err("%pfwP: group index shift too big\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Find number of interrupt identities */ + rc = fwnode_property_read_u32_array(fwnode, "riscv,num-ids", + &global->nr_ids, 1); + if (rc) { + pr_err("%pfwP: number of interrupt identities not found\n", + fwnode); + goto out_free_local; + } + if ((global->nr_ids < IMSIC_MIN_ID) || + (global->nr_ids >= IMSIC_MAX_ID) || + ((global->nr_ids & IMSIC_MIN_ID) != IMSIC_MIN_ID)) { + pr_err("%pfwP: invalid number of interrupt identities\n", + fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Find number of guest interrupt identities */ + if (fwnode_property_read_u32_array(fwnode, "riscv,num-guest-ids", + &global->nr_guest_ids, 1)) + global->nr_guest_ids = global->nr_ids; + if ((global->nr_guest_ids < IMSIC_MIN_ID) || + (global->nr_guest_ids >= IMSIC_MAX_ID) || + ((global->nr_guest_ids & IMSIC_MIN_ID) != IMSIC_MIN_ID)) { + pr_err("%pfwP: invalid number of guest interrupt identities\n", + fwnode); + rc = -EINVAL; + goto out_free_local; + } + + /* Compute base address */ + rc = imsic_get_mmio_resource(fwnode, 0, &res); + if (rc) { + pr_err("%pfwP: first MMIO resource not found\n", fwnode); + rc = -EINVAL; + goto out_free_local; + } + global->base_addr = res.start; + global->base_addr &= ~(BIT(global->guest_index_bits + + global->hart_index_bits + + IMSIC_MMIO_PAGE_SHIFT) - 1); + global->base_addr &= ~((BIT(global->group_index_bits) - 1) << + global->group_index_shift); + + /* Find number of MMIO register sets */ + while (!imsic_get_mmio_resource(fwnode, num_mmios, &res)) + num_mmios++; + + /* Allocate MMIO resource array */ + mmios = kcalloc(num_mmios, sizeof(*mmios), GFP_KERNEL); + if (!mmios) { + rc = -ENOMEM; + goto out_free_local; + } + + /* Allocate MMIO virtual address array */ + mmios_va = kcalloc(num_mmios, sizeof(*mmios_va), GFP_KERNEL); + if (!mmios_va) { + rc = -ENOMEM; + goto out_iounmap; + } + + /* Parse and map MMIO register sets */ + for (i = 0; i < num_mmios; i++) { + rc = imsic_get_mmio_resource(fwnode, i, &mmios[i]); + if (rc) { + pr_err("%pfwP: unable to parse MMIO regset %d\n", + fwnode, i); + goto out_iounmap; + } + + base_addr = mmios[i].start; + base_addr &= ~(BIT(global->guest_index_bits + + global->hart_index_bits + + IMSIC_MMIO_PAGE_SHIFT) - 1); + base_addr &= ~((BIT(global->group_index_bits) - 1) << + global->group_index_shift); + if (base_addr != global->base_addr) { + rc = -EINVAL; + pr_err("%pfwP: address mismatch for regset %d\n", + fwnode, i); + goto out_iounmap; + } + + mmios_va[i] = ioremap(mmios[i].start, resource_size(&mmios[i])); + if (!mmios_va[i]) { + rc = -EIO; + pr_err("%pfwP: unable to map MMIO regset %d\n", + fwnode, i); + goto out_iounmap; + } + } + + /* Initialize interrupt identity management */ + rc = imsic_ids_init(); + if (rc) { + pr_err("%pfwP: failed to initialize interrupt management\n", + fwnode); + goto out_iounmap; + } + + /* Configure handlers for target CPUs */ + for (i = 0; i < nr_parent_irqs; i++) { + rc = imsic_get_parent_hartid(fwnode, i, &hartid); + if (rc) { + pr_warn("%pfwP: hart ID for parent irq%d not found\n", + fwnode, i); + continue; + } + + cpu = riscv_hartid_to_cpuid(hartid); + if (cpu < 0) { + pr_warn("%pfwP: invalid cpuid for parent irq%d\n", + fwnode, i); 
+ continue; + } + + /* Find MMIO location of MSI page */ + index = num_mmios; + reloff = i * BIT(global->guest_index_bits) * + IMSIC_MMIO_PAGE_SZ; + for (j = 0; num_mmios; j++) { + if (reloff < resource_size(&mmios[j])) { + index = j; + break; + } + + /* + * MMIO region size may not be aligned to + * BIT(global->guest_index_bits) * IMSIC_MMIO_PAGE_SZ + * if holes are present. + */ + reloff -= ALIGN(resource_size(&mmios[j]), + BIT(global->guest_index_bits) * IMSIC_MMIO_PAGE_SZ); + } + if (index >= num_mmios) { + pr_warn("%pfwP: MMIO not found for parent irq%d\n", + fwnode, i); + continue; + } + + cpumask_set_cpu(cpu, &imsic->lmask); + + local = per_cpu_ptr(global->local, cpu); + local->msi_pa = mmios[index].start + reloff; + local->msi_va = mmios_va[index] + reloff; + + nr_handlers++; + } + + /* If no CPU handlers found then can't take interrupts */ + if (!nr_handlers) { + pr_err("%pfwP: No CPU handlers found\n", fwnode); + rc = -ENODEV; + goto out_ids_cleanup; + } + + /* Find parent domain and register chained handler */ + domain = irq_find_matching_fwnode(riscv_get_intc_hwnode(), + DOMAIN_BUS_ANY); + if (!domain) { + pr_err("%pfwP: Failed to find INTC domain\n", fwnode); + rc = -ENOENT; + goto out_ids_cleanup; + } + imsic_parent_irq = irq_create_mapping(domain, RV_IRQ_EXT); + if (!imsic_parent_irq) { + pr_err("%pfwP: Failed to create INTC mapping\n", fwnode); + rc = -ENOENT; + goto out_ids_cleanup; + } + irq_set_chained_handler(imsic_parent_irq, imsic_handle_irq); + + /* Initialize IPI domain */ + rc = imsic_ipi_domain_init(); + if (rc) { + pr_err("%pfwP: Failed to initialize IPI domain\n", fwnode); + goto out_ids_cleanup; + } + + /* Initialize IRQ and MSI domains */ + rc = imsic_irq_domains_init(fwnode); + if (rc) { + pr_err("%pfwP: Failed to initialize IRQ and MSI domains\n", + fwnode); + goto out_ipi_domain_cleanup; + } + + /* + * Setup cpuhp state (must be done after setting imsic_parent_irq) + * + * Don't disable per-CPU IMSIC file when CPU goes offline + * because this affects IPI and the masking/unmasking of + * virtual IPIs is done via generic IPI-Mux + */ + cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, + "irqchip/riscv/imsic:starting", + imsic_starting_cpu, imsic_dying_cpu); + + /* We don't need MMIO arrays anymore so let's free-up */ + kfree(mmios_va); + kfree(mmios); + + pr_info("%pfwP: hart-index-bits: %d, guest-index-bits: %d\n", + fwnode, global->hart_index_bits, global->guest_index_bits); + pr_info("%pfwP: group-index-bits: %d, group-index-shift: %d\n", + fwnode, global->group_index_bits, global->group_index_shift); + pr_info("%pfwP: mapped %d interrupts for %d CPUs at %pa\n", + fwnode, global->nr_ids, nr_handlers, &global->base_addr); + if (imsic->ipi_id) + pr_info("%pfwP: providing IPIs using interrupt %d\n", + fwnode, imsic->ipi_id); + + return 0; + +out_ipi_domain_cleanup: + imsic_ipi_domain_cleanup(); +out_ids_cleanup: + imsic_ids_cleanup(); +out_iounmap: + for (i = 0; i < num_mmios; i++) { + if (mmios_va[i]) + iounmap(mmios_va[i]); + } + kfree(mmios_va); + kfree(mmios); +out_free_local: + free_percpu(imsic->global.local); +out_free_priv: + kfree(imsic); + imsic = NULL; + return rc; +} + +static int __init imsic_dt_init(struct device_node *node, + struct device_node *parent) +{ + return imsic_init(&node->fwnode); +} +IRQCHIP_DECLARE(riscv_imsic, "riscv,imsics", imsic_dt_init); diff --git a/include/linux/irqchip/riscv-imsic.h b/include/linux/irqchip/riscv-imsic.h new file mode 100644 index 000000000000..1f6fc9a57218 --- /dev/null +++ b/include/linux/irqchip/riscv-imsic.h @@ 
-0,0 +1,86 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Copyright (C) 2021 Western Digital Corporation or its affiliates. + * Copyright (C) 2022 Ventana Micro Systems Inc. + */ +#ifndef __LINUX_IRQCHIP_RISCV_IMSIC_H +#define __LINUX_IRQCHIP_RISCV_IMSIC_H + +#include +#include + +#define IMSIC_MMIO_PAGE_SHIFT 12 +#define IMSIC_MMIO_PAGE_SZ (1UL << IMSIC_MMIO_PAGE_SHIFT) +#define IMSIC_MMIO_PAGE_LE 0x00 +#define IMSIC_MMIO_PAGE_BE 0x04 + +#define IMSIC_MIN_ID 63 +#define IMSIC_MAX_ID 2048 + +#define IMSIC_EIDELIVERY 0x70 + +#define IMSIC_EITHRESHOLD 0x72 + +#define IMSIC_EIP0 0x80 +#define IMSIC_EIP63 0xbf +#define IMSIC_EIPx_BITS 32 + +#define IMSIC_EIE0 0xc0 +#define IMSIC_EIE63 0xff +#define IMSIC_EIEx_BITS 32 + +#define IMSIC_FIRST IMSIC_EIDELIVERY +#define IMSIC_LAST IMSIC_EIE63 + +#define IMSIC_MMIO_SETIPNUM_LE 0x00 +#define IMSIC_MMIO_SETIPNUM_BE 0x04 + +struct imsic_local_config { + phys_addr_t msi_pa; + void __iomem *msi_va; +}; + +struct imsic_global_config { + /* + * MSI Target Address Scheme + * + * XLEN-1 12 0 + * | | | + * ------------------------------------------------------------- + * |xxxxxx|Group Index|xxxxxxxxxxx|HART Index|Guest Index| 0 | + * ------------------------------------------------------------- + */ + + /* Bits representing Guest index, HART index, and Group index */ + u32 guest_index_bits; + u32 hart_index_bits; + u32 group_index_bits; + u32 group_index_shift; + + /* Global base address matching all target MSI addresses */ + phys_addr_t base_addr; + + /* Number of interrupt identities */ + u32 nr_ids; + + /* Number of guest interrupt identities */ + u32 nr_guest_ids; + + /* Per-CPU IMSIC addresses */ + struct imsic_local_config __percpu *local; +}; + +#ifdef CONFIG_RISCV_IMSIC + +extern const struct imsic_global_config *imsic_get_global_config(void); + +#else + +static inline const struct imsic_global_config *imsic_get_global_config(void) +{ + return NULL; +} + +#endif + +#endif From patchwork Mon Jul 10 09:43:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Anup Patel X-Patchwork-Id: 701319 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D39DAEB64DA for ; Mon, 10 Jul 2023 09:50:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232759AbjGJJuS (ORCPT ); Mon, 10 Jul 2023 05:50:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:35488 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232819AbjGJJtM (ORCPT ); Mon, 10 Jul 2023 05:49:12 -0400 Received: from mail-io1-xd2d.google.com (mail-io1-xd2d.google.com [IPv6:2607:f8b0:4864:20::d2d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7DADB3A8D for ; Mon, 10 Jul 2023 02:44:20 -0700 (PDT) Received: by mail-io1-xd2d.google.com with SMTP id ca18e2360f4ac-7836272f36eso124855339f.1 for ; Mon, 10 Jul 2023 02:44:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ventanamicro.com; s=google; t=1688982260; x=1691574260; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=TYQsgV6mmNiNxm7fkk00HaM0IkP9BeYIjrigTOut9Eg=; b=Alm0P0BKQG28JWKQqB96wR3+nDnx+opYYKVvPBXpjltCEH2yUMKoR7sPDMR6JtVHcp 
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Andrew Jones, Sunil V L, Conor Dooley, Saravana Kannan, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Conor Dooley
Subject: [PATCH v5 6/9] dt-bindings: interrupt-controller: Add RISC-V advanced PLIC
Date: Mon, 10 Jul 2023 15:13:18 +0530
Message-Id: <20230710094321.1378351-7-apatel@ventanamicro.com>
In-Reply-To: <20230710094321.1378351-1-apatel@ventanamicro.com>
References: <20230710094321.1378351-1-apatel@ventanamicro.com>
X-Mailing-List: devicetree@vger.kernel.org

We add a DT bindings document for the RISC-V advanced platform level
interrupt controller (APLIC) defined by the RISC-V advanced interrupt
architecture (AIA) specification.
Signed-off-by: Anup Patel Reviewed-by: Conor Dooley --- .../interrupt-controller/riscv,aplic.yaml | 172 ++++++++++++++++++ 1 file changed, 172 insertions(+) create mode 100644 Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml diff --git a/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml b/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml new file mode 100644 index 000000000000..190a6499c932 --- /dev/null +++ b/Documentation/devicetree/bindings/interrupt-controller/riscv,aplic.yaml @@ -0,0 +1,172 @@ +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) +%YAML 1.2 +--- +$id: http://devicetree.org/schemas/interrupt-controller/riscv,aplic.yaml# +$schema: http://devicetree.org/meta-schemas/core.yaml# + +title: RISC-V Advanced Platform Level Interrupt Controller (APLIC) + +maintainers: + - Anup Patel + +description: + The RISC-V advanced interrupt architecture (AIA) defines an advanced + platform level interrupt controller (APLIC) for handling wired interrupts + in a RISC-V platform. The RISC-V AIA specification can be found at + https://github.com/riscv/riscv-aia. + + The RISC-V APLIC is implemented as hierarchical APLIC domains where all + interrupt sources connect to the root APLIC domain and a parent APLIC + domain can delegate interrupt sources to it's child APLIC domains. There + is one device tree node for each APLIC domain. + +allOf: + - $ref: /schemas/interrupt-controller.yaml# + +properties: + compatible: + items: + - enum: + - qemu,aplic + - const: riscv,aplic + + reg: + maxItems: 1 + + interrupt-controller: true + + "#interrupt-cells": + const: 2 + + interrupts-extended: + minItems: 1 + maxItems: 16384 + description: + Given APLIC domain directly injects external interrupts to a set of + RISC-V HARTS (or CPUs). Each node pointed to should be a riscv,cpu-intc + node, which has a CPU node (i.e. RISC-V HART) as parent. + + msi-parent: + description: + Given APLIC domain forwards wired interrupts as MSIs to a AIA incoming + message signaled interrupt controller (IMSIC). If both "msi-parent" and + "interrupts-extended" properties are present then it means the APLIC + domain supports both MSI mode and Direct mode in HW. In this case, the + APLIC driver has to choose between MSI mode or Direct mode. + + riscv,num-sources: + $ref: /schemas/types.yaml#/definitions/uint32 + minimum: 1 + maximum: 1023 + description: + Specifies the number of wired interrupt sources supported by this + APLIC domain. + + riscv,children: + $ref: /schemas/types.yaml#/definitions/phandle-array + minItems: 1 + maxItems: 1024 + items: + maxItems: 1 + description: + A list of child APLIC domains for the given APLIC domain. Each child + APLIC domain is assigned a child index in increasing order, with the + first child APLIC domain assigned child index 0. The APLIC domain child + index is used by firmware to delegate interrupts from the given APLIC + domain to a particular child APLIC domain. + + riscv,delegation: + $ref: /schemas/types.yaml#/definitions/phandle-array + minItems: 1 + maxItems: 1024 + items: + items: + - description: child APLIC domain phandle + - description: first interrupt number of the parent APLIC domain (inclusive) + - description: last interrupt number of the parent APLIC domain (inclusive) + description: + A interrupt delegation list where each entry is a triple consisting + of child APLIC domain phandle, first interrupt number of the parent + APLIC domain, and last interrupt number of the parent APLIC domain. 
+ Firmware must configure interrupt delegation registers based on + interrupt delegation list. + +dependencies: + riscv,delegation: [ "riscv,children" ] + +required: + - compatible + - reg + - interrupt-controller + - "#interrupt-cells" + - riscv,num-sources + +anyOf: + - required: + - interrupts-extended + - required: + - msi-parent + +unevaluatedProperties: false + +examples: + - | + // Example 1 (APLIC domains directly injecting interrupt to HARTs): + + interrupt-controller@c000000 { + compatible = "qemu,aplic", "riscv,aplic"; + interrupts-extended = <&cpu1_intc 11>, + <&cpu2_intc 11>, + <&cpu3_intc 11>, + <&cpu4_intc 11>; + reg = <0xc000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + riscv,children = <&aplic1>, <&aplic2>; + riscv,delegation = <&aplic1 1 63>; + }; + + aplic1: interrupt-controller@d000000 { + compatible = "qemu,aplic", "riscv,aplic"; + interrupts-extended = <&cpu1_intc 9>, + <&cpu2_intc 9>; + reg = <0xd000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; + + aplic2: interrupt-controller@e000000 { + compatible = "qemu,aplic", "riscv,aplic"; + interrupts-extended = <&cpu3_intc 9>, + <&cpu4_intc 9>; + reg = <0xe000000 0x4080>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; + + - | + // Example 2 (APLIC domains forwarding interrupts as MSIs): + + interrupt-controller@c000000 { + compatible = "qemu,aplic", "riscv,aplic"; + msi-parent = <&imsic_mlevel>; + reg = <0xc000000 0x4000>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + riscv,children = <&aplic3>; + riscv,delegation = <&aplic3 1 63>; + }; + + aplic3: interrupt-controller@d000000 { + compatible = "qemu,aplic", "riscv,aplic"; + msi-parent = <&imsic_slevel>; + reg = <0xd000000 0x4000>; + interrupt-controller; + #interrupt-cells = <2>; + riscv,num-sources = <63>; + }; +... 
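As a reading aid (not part of the binding), a hypothetical consumer node for
Example 1 above; it assumes the conventional two-cell interrupt specifier of
source number plus trigger type implied by #interrupt-cells = <2>, with 4
meaning active-high level. The device, its address, and the trigger value are
illustrative only:

    uart0: serial@10000000 {
        // Hypothetical wired peripheral routed through aplic1 from Example 1
        compatible = "ns16550a";
        reg = <0x10000000 0x100>;
        interrupt-parent = <&aplic1>;
        interrupts = <10 4>;    /* source 10, active-high level */
    };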
From patchwork Mon Jul 10 09:43:20 2023
X-Patchwork-Submitter: Anup Patel
X-Patchwork-Id: 701318
From: Anup Patel
To: Palmer Dabbelt, Paul Walmsley, Thomas Gleixner, Marc Zyngier, Rob Herring, Krzysztof Kozlowski
Cc: Atish Patra, Andrew Jones, Sunil V L, Conor Dooley, Saravana Kannan, Anup Patel, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, devicetree@vger.kernel.org, Conor Dooley
Subject: [PATCH v5 8/9] RISC-V: Select APLIC and IMSIC drivers
Date: Mon, 10 Jul 2023 15:13:20 +0530
Message-Id: <20230710094321.1378351-9-apatel@ventanamicro.com>
In-Reply-To: <20230710094321.1378351-1-apatel@ventanamicro.com>
References: <20230710094321.1378351-1-apatel@ventanamicro.com>
X-Mailing-List: devicetree@vger.kernel.org

The QEMU virt machine supports AIA emulation, and quite a few RISC-V
platforms with AIA support are under development, so let us select the
APLIC and IMSIC drivers for all RISC-V platforms.

Signed-off-by: Anup Patel
Reviewed-by: Conor Dooley
---
 arch/riscv/Kconfig | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4c07b9189c86..318f62a0a187 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -148,6 +148,8 @@ config RISCV
 	select PCI_DOMAINS_GENERIC if PCI
 	select PCI_MSI if PCI
 	select RISCV_ALTERNATIVE if !XIP_KERNEL
+	select RISCV_APLIC
+	select RISCV_IMSIC
 	select RISCV_INTC
 	select RISCV_TIMER if RISCV_SBI
 	select SIFIVE_PLIC
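With these options selected, code elsewhere in the kernel can probe at run
time whether an IMSIC was actually discovered, using the accessor added
earlier in this series. An illustrative sketch (not part of the patch); the
!CONFIG_RISCV_IMSIC stub in riscv-imsic.h simply returns NULL:

    #include <linux/irqchip/riscv-imsic.h>

    /*
     * Sketch, not part of the patch: imsic_get_global_config() comes from
     * the header added by patch 4/9 and returns NULL when no IMSIC driver
     * is built in or none was probed.
     */
    static bool example_imsic_present(void)
    {
    	return imsic_get_global_config() != NULL;
    }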