From patchwork Wed Dec 13 00:36:14 2023
X-Patchwork-Submitter: Igor Mammedov
X-Patchwork-Id: 753577
From: Igor Mammedov <imammedo@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Dongli Zhang, linux-acpi@vger.kernel.org, linux-pci@vger.kernel.org,
    imammedo@redhat.com, mst@redhat.com, rafael@kernel.org, lenb@kernel.org,
    bhelgaas@google.com, mika.westerberg@linux.intel.com,
    boris.ostrovsky@oracle.com, joe.jin@oracle.com, stable@vger.kernel.org,
    Fiona Ebner, Thomas Lamprecht
Subject: [RFC 2/2] PCI: acpiphp: slow down hotplug if hotplugging multiple
 devices at a time
Date: Wed, 13 Dec 2023 01:36:14 +0100
Message-Id: <20231213003614.1648343-3-imammedo@redhat.com>
In-Reply-To: <20231213003614.1648343-1-imammedo@redhat.com>
References: <20231213003614.1648343-1-imammedo@redhat.com>
X-Mailing-List: linux-acpi@vger.kernel.org

The previous commit ("PCI: acpiphp: enable slot only if it hasn't been
enabled already") introduced a workaround to avoid a race between the
SCSI_SCAN_ASYNC job and bridge reconfiguration when a single HBA is
hotplugged. However, in a virtualized environment it is possible to pause
the machine, hotplug several HBAs, and then let the machine run. That can
hit the same race when the 2nd hotplugged HBA starts reconfiguring the
bridge. Do the same thing as SHPC and throttle hotplug of the 2nd and
subsequent devices within a single hotplug event.
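To illustrate the scenario this targets (an example, not part of the fix):
with QEMU one can pause the guest, hotplug two HBAs from the monitor, and
resume, so that both slots end up being processed within a single hotplug
event; the device model and IDs below are made up for the illustration:

    (qemu) stop
    (qemu) device_add virtio-scsi-pci,id=hba1
    (qemu) device_add virtio-scsi-pci,id=hba2
    (qemu) cont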
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
 drivers/pci/hotplug/acpiphp_glue.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
index 6b11609927d6..30bca2086b24 100644
--- a/drivers/pci/hotplug/acpiphp_glue.c
+++ b/drivers/pci/hotplug/acpiphp_glue.c
@@ -37,6 +37,7 @@
 #include <linux/mutex.h>
 #include <linux/slab.h>
 #include <linux/acpi.h>
+#include <linux/delay.h>
 
 #include "../pci.h"
 #include "acpiphp.h"
@@ -700,6 +701,7 @@ static void trim_stale_devices(struct pci_dev *dev)
 static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
 {
 	struct acpiphp_slot *slot;
+	int nr_hp_slots = 0;
 
 	/* Bail out if the bridge is going away. */
 	if (bridge->is_going_away)
@@ -723,6 +725,10 @@ static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
 
 			/* configure all functions */
 			if (slot->flags != SLOT_ENABLED) {
+				if (nr_hp_slots)
+					msleep(1000);
+
+				++nr_hp_slots;
 				enable_slot(slot, true);
 			}
 		} else {
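For readability, the hot path of acpiphp_check_bridge() with this patch
applied would read roughly as below (a sketch assembled from the hunks
above; unrelated code elided, and the explanatory comment is added here,
not part of the patch):

    static void acpiphp_check_bridge(struct acpiphp_bridge *bridge)
    {
    	struct acpiphp_slot *slot;
    	int nr_hp_slots = 0;	/* slots enabled during this event */

    	/* Bail out if the bridge is going away. */
    	if (bridge->is_going_away)
    		return;
    	...
    			/* configure all functions */
    			if (slot->flags != SLOT_ENABLED) {
    				/*
    				 * Throttle every hotplug after the first
    				 * one in this event, giving the previous
    				 * device's async scan a chance to settle
    				 * before the bridge is reconfigured again.
    				 */
    				if (nr_hp_slots)
    					msleep(1000);

    				++nr_hp_slots;
    				enable_slot(slot, true);
    			}
    	...
    }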