From patchwork Sat Mar 25 16:18:01 2017
X-Patchwork-Submitter: Sumit Semwal
X-Patchwork-Id: 95978
From: Sumit Semwal
To: stable@vger.kernel.org
Cc: Dan Streetman, Dan Streetman, Boris Ostrovsky, Sasha Levin,
 Greg Kroah-Hartman, Sumit Semwal
Subject: [PATCH for-4.4 01/19] xen: do not re-use pirq number cached in pci
 device msi msg data
Date: Sat, 25 Mar 2017 21:48:01 +0530
Message-Id: <1490458699-24484-2-git-send-email-sumit.semwal@linaro.org>
In-Reply-To: <1490458699-24484-1-git-send-email-sumit.semwal@linaro.org>
References: <1490458699-24484-1-git-send-email-sumit.semwal@linaro.org>
X-Mailing-List: stable@vger.kernel.org
From: Dan Streetman

[ Upstream commit c74fd80f2f41d05f350bb478151021f88551afe8 ]

Revert the main part of commit:
af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")

That commit introduced reading the pci device's msi message data to see
if a pirq was previously configured for the device's msi/msix, and
re-use that pirq.  At the time, that was the correct behavior.
However, a later change to Qemu caused it to call into the Xen
hypervisor to unmap all pirqs for a pci device, when the pci device
disables its MSI/MSIX vectors; specifically the Qemu commit:
c976437c7dba9c7444fb41df45468968aaa326ad
("qemu-xen: free all the pirqs for msi/msix when driver unload")

Once Qemu added this pirq unmapping, it was no longer correct for the
kernel to re-use the pirq number cached in the pci device msi message
data.  All Qemu releases since 2.1.0 contain the patch that unmaps the
pirqs when the pci device disables its MSI/MSIX vectors.

This bug is causing failures to initialize multiple NVMe controllers
under Xen, because the NVMe driver sets up a single MSIX vector for
each controller (concurrently), and then after using that to talk to
the controller for some configuration data, it disables the single
MSIX vector and re-configures all the MSIX vectors it needs.  So the
MSIX setup code tries to re-use the cached pirq from the first vector
for each controller, but the hypervisor has already given away that
pirq to another controller, and its initialization fails.

This is discussed in more detail at:
https://lists.xen.org/archives/html/xen-devel/2017-01/msg00447.html

Fixes: af42b8d12f8a ("xen: fix MSI setup and teardown for PV on HVM guests")
Signed-off-by: Dan Streetman
Reviewed-by: Stefano Stabellini
Acked-by: Konrad Rzeszutek Wilk
Signed-off-by: Boris Ostrovsky
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
Signed-off-by: Sumit Semwal
---
 arch/x86/pci/xen.c | 23 +++++++----------------
 1 file changed, 7 insertions(+), 16 deletions(-)

-- 
2.7.4

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index c6d6efe..7575f07 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -231,23 +231,14 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		return 1;
 
 	for_each_pci_msi_entry(msidesc, dev) {
-		__pci_read_msi_msg(msidesc, &msg);
-		pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
-			((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
-		if (msg.data != XEN_PIRQ_MSI_DATA ||
-		    xen_irq_from_pirq(pirq) < 0) {
-			pirq = xen_allocate_pirq_msi(dev, msidesc);
-			if (pirq < 0) {
-				irq = -ENODEV;
-				goto error;
-			}
-			xen_msi_compose_msg(dev, pirq, &msg);
-			__pci_write_msi_msg(msidesc, &msg);
-			dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
-		} else {
-			dev_dbg(&dev->dev,
-				"xen: msi already bound to pirq=%d\n", pirq);
+		pirq = xen_allocate_pirq_msi(dev, msidesc);
+		if (pirq < 0) {
+			irq = -ENODEV;
+			goto error;
 		}
+		xen_msi_compose_msg(dev, pirq, &msg);
+		__pci_write_msi_msg(msidesc, &msg);
+		dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
 					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?
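
The scheme this patch removes worked by stashing the pirq in the MSI
message itself: the low 8 bits of the pirq were packed into the
address_lo destination-ID field and the upper bits into address_hi,
with msg.data set to the XEN_PIRQ_MSI_DATA marker so a later setup call
could detect and re-use the mapping.  The standalone C sketch below
round-trips a pirq through that encoding.  It is illustrative only:
the mask and shift constants follow arch/x86/include/asm/msidef.h of
that era, but XEN_PIRQ_MSI_DATA is stubbed with an arbitrary value,
the fixed MSI base-address and delivery-mode bits are left out, and
compose()/extract() are local helpers, not kernel functions.

#include <stdint.h>
#include <stdio.h>

#define MSI_ADDR_DEST_ID_SHIFT     12
#define MSI_ADDR_DEST_ID_MASK      0x00ffff0U
#define MSI_ADDR_DEST_ID(dest)     (((dest) << MSI_ADDR_DEST_ID_SHIFT) & \
                                    MSI_ADDR_DEST_ID_MASK)
#define MSI_ADDR_EXT_DEST_ID(dest) ((dest) & 0xffffff00U)
#define XEN_PIRQ_MSI_DATA          0x4142U  /* stand-in marker, not the real value */

struct msi_msg {
        uint32_t address_hi;
        uint32_t address_lo;
        uint32_t data;
};

static void compose(struct msi_msg *msg, uint32_t pirq)
{
        /* Low 8 bits of the pirq land in the address_lo dest-id field,
         * the remaining upper bits in address_hi. */
        msg->address_hi = MSI_ADDR_EXT_DEST_ID(pirq);
        msg->address_lo = MSI_ADDR_DEST_ID(pirq);
        msg->data = XEN_PIRQ_MSI_DATA;
}

static uint32_t extract(const struct msi_msg *msg)
{
        /* Same expression the removed code used to recover the pirq. */
        return MSI_ADDR_EXT_DEST_ID(msg->address_hi) |
               ((msg->address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
}

int main(void)
{
        struct msi_msg msg;
        uint32_t pirq = 300;  /* arbitrary example pirq number */

        compose(&msg, pirq);
        /* The old code's first check: is the marker present? (It then
         * also verified the pirq was still mapped via xen_irq_from_pirq.) */
        if (msg.data == XEN_PIRQ_MSI_DATA)
                printf("recovered cached pirq: %u\n", extract(&msg));
        return 0;
}

The decode in extract() mirrors the expression deleted above.  Once
Qemu 2.1.0+ began unmapping pirqs when a device disables MSI/MSIX, a
value recovered this way could already have been handed to another
device, which is exactly the NVMe failure described in the commit
message.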